Feb 18 19:18:16 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 18 19:18:16 crc restorecon[4682]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 19:18:16 crc restorecon[4682]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc 
restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc 
restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 
19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:18:16 crc 
restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:18:16 crc 
restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:16
crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 
19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:18:16 crc 
restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc 
restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc 
restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:16 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:16 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:18:17 crc restorecon[4682]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 
crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc 
restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:18:17 crc restorecon[4682]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc 
restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:18:17 crc restorecon[4682]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:18:17 crc restorecon[4682]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 18 19:18:17 crc kubenswrapper[4754]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 19:18:17 crc kubenswrapper[4754]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 18 19:18:17 crc kubenswrapper[4754]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 19:18:17 crc kubenswrapper[4754]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 18 19:18:17 crc kubenswrapper[4754]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 18 19:18:17 crc kubenswrapper[4754]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.967552 4754 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975542 4754 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975596 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975602 4754 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975609 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975615 4754 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975621 4754 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975630 4754 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975636 4754 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975641 4754 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975647 4754 
feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975653 4754 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975658 4754 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975663 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975668 4754 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975673 4754 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975677 4754 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975681 4754 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975686 4754 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975690 4754 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975695 4754 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975700 4754 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975705 4754 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975709 4754 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975715 4754 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 
19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975720 4754 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975724 4754 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975729 4754 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975734 4754 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975738 4754 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975742 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975747 4754 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975777 4754 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975786 4754 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975794 4754 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975801 4754 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975810 4754 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975816 4754 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975824 4754 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975832 4754 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975838 4754 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975845 4754 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975851 4754 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975857 4754 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975863 4754 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975869 4754 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975873 4754 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975878 4754 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975884 4754 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975889 4754 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975894 4754 feature_gate.go:330] unrecognized feature 
gate: InsightsOnDemandDataGather Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975899 4754 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975905 4754 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975910 4754 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975916 4754 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975924 4754 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975929 4754 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975934 4754 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975939 4754 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975947 4754 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975953 4754 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975958 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975963 4754 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975968 4754 feature_gate.go:330] unrecognized feature gate: Example Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975973 4754 feature_gate.go:330] 
unrecognized feature gate: RouteAdvertisements Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975978 4754 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975984 4754 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975989 4754 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975994 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.975999 4754 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.976004 4754 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.976012 4754 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976161 4754 flags.go:64] FLAG: --address="0.0.0.0" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976175 4754 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976188 4754 flags.go:64] FLAG: --anonymous-auth="true" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976197 4754 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976206 4754 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976212 4754 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976221 4754 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976229 4754 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976236 4754 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976242 4754 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976248 4754 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976257 4754 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976265 4754 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976272 4754 flags.go:64] FLAG: --cgroup-root="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976277 4754 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976286 4754 flags.go:64] FLAG: --client-ca-file="" Feb 18 19:18:17 crc kubenswrapper[4754]: 
I0218 19:18:17.976292 4754 flags.go:64] FLAG: --cloud-config="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976297 4754 flags.go:64] FLAG: --cloud-provider="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976304 4754 flags.go:64] FLAG: --cluster-dns="[]" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976320 4754 flags.go:64] FLAG: --cluster-domain="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976326 4754 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976333 4754 flags.go:64] FLAG: --config-dir="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976339 4754 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976346 4754 flags.go:64] FLAG: --container-log-max-files="5" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976362 4754 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976369 4754 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976376 4754 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976383 4754 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976391 4754 flags.go:64] FLAG: --contention-profiling="false" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976398 4754 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976406 4754 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976413 4754 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976419 4754 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 18 19:18:17 crc 
kubenswrapper[4754]: I0218 19:18:17.976427 4754 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976434 4754 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976441 4754 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976447 4754 flags.go:64] FLAG: --enable-load-reader="false" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976453 4754 flags.go:64] FLAG: --enable-server="true" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976459 4754 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976469 4754 flags.go:64] FLAG: --event-burst="100" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976475 4754 flags.go:64] FLAG: --event-qps="50" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976481 4754 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976487 4754 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976494 4754 flags.go:64] FLAG: --eviction-hard="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976502 4754 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976508 4754 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976514 4754 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976526 4754 flags.go:64] FLAG: --eviction-soft="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976533 4754 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976539 4754 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 18 
19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976546 4754 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976556 4754 flags.go:64] FLAG: --experimental-mounter-path="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976563 4754 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976569 4754 flags.go:64] FLAG: --fail-swap-on="true" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976575 4754 flags.go:64] FLAG: --feature-gates="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976583 4754 flags.go:64] FLAG: --file-check-frequency="20s" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976590 4754 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976597 4754 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976604 4754 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976610 4754 flags.go:64] FLAG: --healthz-port="10248" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976616 4754 flags.go:64] FLAG: --help="false" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976623 4754 flags.go:64] FLAG: --hostname-override="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976629 4754 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976635 4754 flags.go:64] FLAG: --http-check-frequency="20s" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976641 4754 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976647 4754 flags.go:64] FLAG: --image-credential-provider-config="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976653 4754 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 18 
19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976659 4754 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976665 4754 flags.go:64] FLAG: --image-service-endpoint="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976671 4754 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976676 4754 flags.go:64] FLAG: --kube-api-burst="100" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976682 4754 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976688 4754 flags.go:64] FLAG: --kube-api-qps="50" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976694 4754 flags.go:64] FLAG: --kube-reserved="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976699 4754 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976705 4754 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976711 4754 flags.go:64] FLAG: --kubelet-cgroups="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976716 4754 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976722 4754 flags.go:64] FLAG: --lock-file="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976727 4754 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976733 4754 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976740 4754 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976751 4754 flags.go:64] FLAG: --log-json-split-stream="false" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976765 4754 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 18 
19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976771 4754 flags.go:64] FLAG: --log-text-split-stream="false" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976776 4754 flags.go:64] FLAG: --logging-format="text" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976782 4754 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976789 4754 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976795 4754 flags.go:64] FLAG: --manifest-url="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976801 4754 flags.go:64] FLAG: --manifest-url-header="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976809 4754 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976815 4754 flags.go:64] FLAG: --max-open-files="1000000" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976825 4754 flags.go:64] FLAG: --max-pods="110" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976832 4754 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976839 4754 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976845 4754 flags.go:64] FLAG: --memory-manager-policy="None" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976854 4754 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976861 4754 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976867 4754 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976873 4754 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 18 19:18:17 
crc kubenswrapper[4754]: I0218 19:18:17.976890 4754 flags.go:64] FLAG: --node-status-max-images="50" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976896 4754 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976903 4754 flags.go:64] FLAG: --oom-score-adj="-999" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976909 4754 flags.go:64] FLAG: --pod-cidr="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976914 4754 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976923 4754 flags.go:64] FLAG: --pod-manifest-path="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976929 4754 flags.go:64] FLAG: --pod-max-pids="-1" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976935 4754 flags.go:64] FLAG: --pods-per-core="0" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976941 4754 flags.go:64] FLAG: --port="10250" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976947 4754 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976952 4754 flags.go:64] FLAG: --provider-id="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976958 4754 flags.go:64] FLAG: --qos-reserved="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976964 4754 flags.go:64] FLAG: --read-only-port="10255" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976970 4754 flags.go:64] FLAG: --register-node="true" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976976 4754 flags.go:64] FLAG: --register-schedulable="true" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976982 4754 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.976996 4754 
flags.go:64] FLAG: --registry-burst="10" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977004 4754 flags.go:64] FLAG: --registry-qps="5" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977011 4754 flags.go:64] FLAG: --reserved-cpus="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977018 4754 flags.go:64] FLAG: --reserved-memory="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977026 4754 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977031 4754 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977038 4754 flags.go:64] FLAG: --rotate-certificates="false" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977044 4754 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977051 4754 flags.go:64] FLAG: --runonce="false" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977057 4754 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977064 4754 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977070 4754 flags.go:64] FLAG: --seccomp-default="false" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977076 4754 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977081 4754 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977088 4754 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977093 4754 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977100 4754 flags.go:64] FLAG: --storage-driver-password="root" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977106 4754 
flags.go:64] FLAG: --storage-driver-secure="false" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977112 4754 flags.go:64] FLAG: --storage-driver-table="stats" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977119 4754 flags.go:64] FLAG: --storage-driver-user="root" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977125 4754 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977130 4754 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977163 4754 flags.go:64] FLAG: --system-cgroups="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977170 4754 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977182 4754 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977188 4754 flags.go:64] FLAG: --tls-cert-file="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977194 4754 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977203 4754 flags.go:64] FLAG: --tls-min-version="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977209 4754 flags.go:64] FLAG: --tls-private-key-file="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977216 4754 flags.go:64] FLAG: --topology-manager-policy="none" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977222 4754 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977228 4754 flags.go:64] FLAG: --topology-manager-scope="container" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977234 4754 flags.go:64] FLAG: --v="2" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977249 4754 flags.go:64] FLAG: --version="false" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977256 4754 flags.go:64] FLAG: 
--vmodule="" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977264 4754 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977269 4754 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977401 4754 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977408 4754 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977414 4754 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977418 4754 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977422 4754 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977426 4754 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977430 4754 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977434 4754 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977438 4754 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977443 4754 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977447 4754 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977452 4754 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977455 4754 
feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977459 4754 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977464 4754 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977468 4754 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977473 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977476 4754 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977481 4754 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977485 4754 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977490 4754 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977495 4754 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977499 4754 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977504 4754 feature_gate.go:330] unrecognized feature gate: Example Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977508 4754 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977512 4754 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977517 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977522 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977527 4754 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977531 4754 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977536 4754 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977540 4754 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977545 4754 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977549 4754 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977557 4754 
feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977561 4754 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977566 4754 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977570 4754 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977575 4754 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977579 4754 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977583 4754 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977587 4754 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977591 4754 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977596 4754 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977600 4754 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977605 4754 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977610 4754 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977615 4754 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977619 4754 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977624 4754 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977628 4754 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977632 4754 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977636 4754 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977640 4754 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977644 4754 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977648 4754 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977653 4754 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977657 4754 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977661 4754 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977665 4754 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 19:18:17 crc 
kubenswrapper[4754]: W0218 19:18:17.977669 4754 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977674 4754 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977679 4754 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977684 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977689 4754 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977694 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977702 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977706 4754 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977712 4754 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977717 4754 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.977722 4754 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.977739 4754 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true 
VolumeAttributesClass:false]} Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.987478 4754 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.987522 4754 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987626 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987636 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987643 4754 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987649 4754 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987656 4754 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987661 4754 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987667 4754 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987673 4754 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987679 4754 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987687 4754 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987693 4754 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987702 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 
19:18:17.987709 4754 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987714 4754 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987720 4754 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987726 4754 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987731 4754 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987737 4754 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987742 4754 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987748 4754 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987753 4754 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987759 4754 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987764 4754 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987769 4754 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987776 4754 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987785 4754 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987790 4754 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987796 4754 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987802 4754 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987809 4754 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987816 4754 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987823 4754 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987829 4754 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987834 4754 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987840 4754 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987845 4754 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987850 4754 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987856 4754 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987861 4754 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 19:18:17 crc 
kubenswrapper[4754]: W0218 19:18:17.987866 4754 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987873 4754 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987880 4754 feature_gate.go:330] unrecognized feature gate: Example Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987887 4754 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987894 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987901 4754 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987907 4754 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987913 4754 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987920 4754 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987926 4754 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987932 4754 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987939 4754 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987946 4754 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987952 4754 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987958 4754 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987964 4754 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987970 4754 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987976 4754 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987981 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987986 4754 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987991 4754 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.987997 4754 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988002 4754 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988007 4754 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988012 4754 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988018 4754 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988026 4754 
feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988032 4754 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988039 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988044 4754 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988051 4754 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988057 4754 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.988066 4754 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988246 4754 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988255 4754 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988263 4754 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988268 4754 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988274 4754 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988281 4754 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988288 4754 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988295 4754 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988302 4754 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988347 4754 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988354 4754 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988359 4754 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988364 4754 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988370 4754 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988376 4754 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988381 4754 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988387 4754 
feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988392 4754 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988397 4754 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988402 4754 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988408 4754 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988413 4754 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988419 4754 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988424 4754 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988430 4754 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988435 4754 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988441 4754 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988446 4754 feature_gate.go:330] unrecognized feature gate: Example Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988451 4754 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988458 4754 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988464 4754 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 
19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988471 4754 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988477 4754 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988483 4754 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988489 4754 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988495 4754 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988500 4754 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988507 4754 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988512 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988518 4754 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988523 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988528 4754 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988534 4754 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988539 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988544 4754 feature_gate.go:330] unrecognized feature gate: 
InsightsConfigAPI Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988549 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988555 4754 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988560 4754 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988565 4754 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988571 4754 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988576 4754 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988582 4754 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988587 4754 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988593 4754 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988598 4754 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988604 4754 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988611 4754 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988616 4754 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988622 4754 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988627 
4754 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988632 4754 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988637 4754 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988788 4754 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988795 4754 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988800 4754 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988807 4754 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988812 4754 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988818 4754 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988823 4754 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988829 4754 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 19:18:17 crc kubenswrapper[4754]: W0218 19:18:17.988834 4754 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.988842 4754 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false 
ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.989050 4754 server.go:940] "Client rotation is on, will bootstrap in background" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.994301 4754 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.994400 4754 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.996192 4754 server.go:997] "Starting client certificate rotation" Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.996220 4754 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.996483 4754 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-21 15:00:19.945089464 +0000 UTC Feb 18 19:18:17 crc kubenswrapper[4754]: I0218 19:18:17.996637 4754 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.022898 4754 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 19:18:18 crc kubenswrapper[4754]: E0218 19:18:18.027444 4754 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.173:6443: 
connect: connection refused" logger="UnhandledError" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.027889 4754 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.047675 4754 log.go:25] "Validated CRI v1 runtime API" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.085850 4754 log.go:25] "Validated CRI v1 image API" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.088306 4754 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.093937 4754 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-18-19-13-10-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.093989 4754 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.112208 4754 manager.go:217] Machine: {Timestamp:2026-02-18 19:18:18.108704248 +0000 UTC m=+0.559117064 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec 
SystemUUID:bca81bce-8907-42d1-98a5-0dfb89b9f859 BootID:8b2b83d7-b7bf-4d49-9f49-d7ce420be65a Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:65:2d:df Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:65:2d:df Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:53:4d:c9 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:29:78:f7 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1a:89:7c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:d7:f6:15 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fa:bd:88:0e:13:95 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d6:14:6e:02:d2:99 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 
Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] 
SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.112545 4754 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.112753 4754 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.117063 4754 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.117382 4754 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.117436 4754 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.117727 4754 topology_manager.go:138] "Creating topology manager with none policy" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.117744 4754 container_manager_linux.go:303] "Creating device plugin manager" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.118458 4754 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.118508 4754 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.118837 4754 state_mem.go:36] "Initialized new in-memory state store" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.118962 4754 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.123008 4754 kubelet.go:418] "Attempting to sync node with API server" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.123039 4754 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.123064 4754 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.123082 4754 kubelet.go:324] "Adding apiserver pod source" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.123097 4754 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 18 19:18:18 crc kubenswrapper[4754]: W0218 19:18:18.127339 4754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 18 19:18:18 crc kubenswrapper[4754]: E0218 19:18:18.127481 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:18:18 crc kubenswrapper[4754]: W0218 19:18:18.127339 4754 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 18 19:18:18 crc kubenswrapper[4754]: E0218 19:18:18.127580 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.129256 4754 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.130244 4754 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.132323 4754 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.134433 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.134466 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.134476 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.134486 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.134504 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.134514 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.134525 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.134540 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.134553 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.134564 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.134577 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.134587 4754 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.136880 4754 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.137560 4754 server.go:1280] "Started kubelet" Feb 18 19:18:18 crc systemd[1]: Started Kubernetes Kubelet. Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.144857 4754 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.145069 4754 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.144807 4754 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.146581 4754 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.146937 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.146992 4754 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 18 19:18:18 crc kubenswrapper[4754]: E0218 19:18:18.147230 4754 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.147474 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 11:10:08.819092129 +0000 UTC Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.147968 4754 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.148010 4754 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 
19:18:18.148375 4754 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 18 19:18:18 crc kubenswrapper[4754]: W0218 19:18:18.149085 4754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 18 19:18:18 crc kubenswrapper[4754]: E0218 19:18:18.149355 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.150798 4754 factory.go:55] Registering systemd factory Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.150848 4754 factory.go:221] Registration of the systemd container factory successfully Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.151303 4754 factory.go:153] Registering CRI-O factory Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.151332 4754 factory.go:221] Registration of the crio container factory successfully Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.151478 4754 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.151505 4754 factory.go:103] Registering Raw factory Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.151531 4754 manager.go:1196] Started watching for new ooms in manager Feb 18 19:18:18 crc kubenswrapper[4754]: E0218 19:18:18.151789 4754 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="200ms" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.156636 4754 manager.go:319] Starting recovery of all containers Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.158397 4754 server.go:460] "Adding debug handlers to kubelet server" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166234 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166300 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166318 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166334 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166351 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 
18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166370 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166383 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166423 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166441 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166458 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166480 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166493 4754 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166507 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166525 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166537 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166584 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166605 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166619 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166633 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166644 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166662 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166675 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166689 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166702 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166715 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166728 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166800 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166815 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166826 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166839 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166881 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166898 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166912 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166926 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166942 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166964 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.166983 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: E0218 19:18:18.163264 4754 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18956d6278969d44 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:18:18.13751738 +0000 UTC m=+0.587930186,LastTimestamp:2026-02-18 19:18:18.13751738 +0000 UTC m=+0.587930186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167004 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167100 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 
19:18:18.167133 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167164 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167175 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167192 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167203 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167213 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167223 4754 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167235 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167245 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167256 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167267 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167277 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167286 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167304 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167316 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167326 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167341 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167352 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167392 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167403 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167412 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167422 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167434 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167446 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167456 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167465 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167474 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167484 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167494 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167502 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167513 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167522 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167532 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167541 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167550 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167585 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167597 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167608 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167617 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167626 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167642 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167653 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167665 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167676 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167686 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167694 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167703 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167714 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167723 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167731 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167743 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167777 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167786 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167796 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167809 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167819 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167828 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167838 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167850 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167860 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167870 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167886 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167897 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167908 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167918 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167935 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167946 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167960 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167971 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167982 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.167992 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168008 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168019 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168030 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168042 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168055 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168071 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168084 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168093 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168103 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168113 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168123 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168164 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168178 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168188 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168197 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168207 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168225 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168239 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168251 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168267 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168277 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168287 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168297 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168307 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168317 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168326 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168337 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168347 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168356 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.168367 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.169921 4754 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.169948 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.169960 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.169970 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.169984 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170019 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170029 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170040 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170050 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170068 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170080 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170091 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170101 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170132 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170160 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170172 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170182 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170192 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170203 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170213 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170224 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170254 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170273 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170282 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170291 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170363 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170373 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170382 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170391 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170451 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170461 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170470 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170479 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170489 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170499 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170509 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170518 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170552 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170562 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170573 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170586 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170600 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170651 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170663 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170695 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170740 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170750 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69"
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170762 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170771 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170784 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170793 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170802 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170812 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170847 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170900 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170949 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170960 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170969 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170977 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170986 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.170995 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.171077 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.171091 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.171102 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.171113 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.171125 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.171138 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.171241 4754 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.171271 4754 reconstruct.go:97] "Volume reconstruction finished" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.171279 4754 reconciler.go:26] "Reconciler: start to sync state" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.193692 4754 manager.go:324] Recovery completed Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.206350 4754 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.208273 4754 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.208314 4754 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.208413 4754 kubelet.go:2335] "Starting kubelet main sync loop" Feb 18 19:18:18 crc kubenswrapper[4754]: E0218 19:18:18.208545 4754 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 18 19:18:18 crc kubenswrapper[4754]: W0218 19:18:18.209447 4754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 18 19:18:18 crc kubenswrapper[4754]: E0218 19:18:18.209518 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.212611 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.214444 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.214491 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.214501 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.216701 4754 
cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.216726 4754 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.216748 4754 state_mem.go:36] "Initialized new in-memory state store" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.234849 4754 policy_none.go:49] "None policy: Start" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.237128 4754 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.237452 4754 state_mem.go:35] "Initializing new in-memory state store" Feb 18 19:18:18 crc kubenswrapper[4754]: E0218 19:18:18.247553 4754 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.289446 4754 manager.go:334] "Starting Device Plugin manager" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.289500 4754 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.289513 4754 server.go:79] "Starting device plugin registration server" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.290184 4754 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.290206 4754 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.290501 4754 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.290583 4754 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.290591 4754 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Feb 18 19:18:18 crc kubenswrapper[4754]: E0218 19:18:18.299508 4754 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.308710 4754 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.308815 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.310356 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.310440 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.310462 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.310749 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.311821 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.311896 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.312274 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.312311 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.312322 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.312495 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.312713 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.312774 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.312923 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.312947 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.312957 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.313711 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.313731 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.313713 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.313754 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.313764 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.313740 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.314094 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:18 crc 
kubenswrapper[4754]: I0218 19:18:18.314168 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.314190 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.314912 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.314936 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.314945 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.315096 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.315111 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.315120 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.315318 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.315501 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.315548 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.316089 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.316118 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.316129 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.316314 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.316339 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.316746 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.316767 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.316775 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.316989 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.317017 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 
19:18:18.317029 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4754]: E0218 19:18:18.353166 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="400ms" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.373831 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.373892 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.373925 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.373944 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 
19:18:18.373961 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.373980 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.373998 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.374039 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.374090 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.374130 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.374180 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.374241 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.374269 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.374301 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.374323 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.390980 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.392689 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.392731 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.392742 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.392773 4754 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:18:18 crc kubenswrapper[4754]: E0218 19:18:18.393377 4754 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.475379 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.475783 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") 
pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.475894 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.475634 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476053 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.475888 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476130 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476204 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476235 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476268 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476300 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476339 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476372 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476403 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476434 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476465 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476501 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476531 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476661 
4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476819 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476685 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476705 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476735 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476759 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") 
pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476764 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476780 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476789 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476800 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476811 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.476659 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.593853 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.595468 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.595502 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.595512 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.595559 4754 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:18:18 crc kubenswrapper[4754]: E0218 19:18:18.595987 4754 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.666118 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.685008 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.702353 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: W0218 19:18:18.705440 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-e2d3f152f2954f3cbd70abc0996b7efee40fef16888dfe0b8836e34d3f560f09 WatchSource:0}: Error finding container e2d3f152f2954f3cbd70abc0996b7efee40fef16888dfe0b8836e34d3f560f09: Status 404 returned error can't find the container with id e2d3f152f2954f3cbd70abc0996b7efee40fef16888dfe0b8836e34d3f560f09 Feb 18 19:18:18 crc kubenswrapper[4754]: W0218 19:18:18.716837 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-534fffd3edef01faa53eecbc47ffa43e38aeced12461394bb44d9d21d1004868 WatchSource:0}: Error finding container 534fffd3edef01faa53eecbc47ffa43e38aeced12461394bb44d9d21d1004868: Status 404 returned error can't find the container with id 534fffd3edef01faa53eecbc47ffa43e38aeced12461394bb44d9d21d1004868 Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.718100 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.722367 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:18 crc kubenswrapper[4754]: W0218 19:18:18.728907 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-3b45be7c64ae844a5cce2a4374dc906b5d82c7feb3eb7adefc969d6a0aef5575 WatchSource:0}: Error finding container 3b45be7c64ae844a5cce2a4374dc906b5d82c7feb3eb7adefc969d6a0aef5575: Status 404 returned error can't find the container with id 3b45be7c64ae844a5cce2a4374dc906b5d82c7feb3eb7adefc969d6a0aef5575 Feb 18 19:18:18 crc kubenswrapper[4754]: W0218 19:18:18.746316 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-b41064e4e4034d76071331d9229291f06c9ae3a8920e720b964b9f198b97e803 WatchSource:0}: Error finding container b41064e4e4034d76071331d9229291f06c9ae3a8920e720b964b9f198b97e803: Status 404 returned error can't find the container with id b41064e4e4034d76071331d9229291f06c9ae3a8920e720b964b9f198b97e803 Feb 18 19:18:18 crc kubenswrapper[4754]: E0218 19:18:18.754354 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="800ms" Feb 18 19:18:18 crc kubenswrapper[4754]: W0218 19:18:18.755279 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-24500ab30d434f134c7607f6822880db05bf3e2d492b21ea378e5f65b0d3964d WatchSource:0}: Error finding container 24500ab30d434f134c7607f6822880db05bf3e2d492b21ea378e5f65b0d3964d: Status 404 returned error can't find the container with id 
24500ab30d434f134c7607f6822880db05bf3e2d492b21ea378e5f65b0d3964d Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.996554 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.997988 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.998034 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.998044 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4754]: I0218 19:18:18.998077 4754 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:18:18 crc kubenswrapper[4754]: E0218 19:18:18.998625 4754 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc" Feb 18 19:18:19 crc kubenswrapper[4754]: I0218 19:18:19.146697 4754 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 18 19:18:19 crc kubenswrapper[4754]: I0218 19:18:19.147749 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 17:34:20.835963734 +0000 UTC Feb 18 19:18:19 crc kubenswrapper[4754]: W0218 19:18:19.167574 4754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: 
connect: connection refused Feb 18 19:18:19 crc kubenswrapper[4754]: E0218 19:18:19.167658 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:18:19 crc kubenswrapper[4754]: I0218 19:18:19.214126 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"24500ab30d434f134c7607f6822880db05bf3e2d492b21ea378e5f65b0d3964d"} Feb 18 19:18:19 crc kubenswrapper[4754]: I0218 19:18:19.215419 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b41064e4e4034d76071331d9229291f06c9ae3a8920e720b964b9f198b97e803"} Feb 18 19:18:19 crc kubenswrapper[4754]: I0218 19:18:19.216464 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3b45be7c64ae844a5cce2a4374dc906b5d82c7feb3eb7adefc969d6a0aef5575"} Feb 18 19:18:19 crc kubenswrapper[4754]: I0218 19:18:19.217383 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"534fffd3edef01faa53eecbc47ffa43e38aeced12461394bb44d9d21d1004868"} Feb 18 19:18:19 crc kubenswrapper[4754]: I0218 19:18:19.218492 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e2d3f152f2954f3cbd70abc0996b7efee40fef16888dfe0b8836e34d3f560f09"} Feb 18 19:18:19 crc kubenswrapper[4754]: W0218 19:18:19.369868 4754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 18 19:18:19 crc kubenswrapper[4754]: E0218 19:18:19.370480 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:18:19 crc kubenswrapper[4754]: W0218 19:18:19.495082 4754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 18 19:18:19 crc kubenswrapper[4754]: E0218 19:18:19.495224 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:18:19 crc kubenswrapper[4754]: W0218 19:18:19.545934 4754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 18 19:18:19 crc kubenswrapper[4754]: E0218 
19:18:19.546018 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:18:19 crc kubenswrapper[4754]: E0218 19:18:19.555216 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="1.6s" Feb 18 19:18:19 crc kubenswrapper[4754]: I0218 19:18:19.798993 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:19 crc kubenswrapper[4754]: I0218 19:18:19.800488 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:19 crc kubenswrapper[4754]: I0218 19:18:19.800591 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:19 crc kubenswrapper[4754]: I0218 19:18:19.800604 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:19 crc kubenswrapper[4754]: I0218 19:18:19.800628 4754 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:18:19 crc kubenswrapper[4754]: E0218 19:18:19.801497 4754 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.146578 4754 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.148610 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 15:01:38.938549661 +0000 UTC Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.208760 4754 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 19:18:20 crc kubenswrapper[4754]: E0218 19:18:20.210131 4754 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.224635 4754 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="3ef9f81f8ebc17fd6b21cca8878ddb21e1cd9e8583cabbcb46042aff79b22246" exitCode=0 Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.224706 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"3ef9f81f8ebc17fd6b21cca8878ddb21e1cd9e8583cabbcb46042aff79b22246"} Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.224775 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.226186 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.226239 4754 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.226258 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.227959 4754 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b" exitCode=0 Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.228079 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b"} Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.228294 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.229755 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.229793 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.229810 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.233432 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.233391 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7"} Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.233595 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182"} Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.233614 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63"} Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.233626 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61"} Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.234703 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.234746 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.234760 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.237760 4754 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b" exitCode=0 Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.237848 4754 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b"} Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.237992 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.239606 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.239680 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.239711 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.242967 4754 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="710e0aebead8837db1519d0ebfb741e833f3ed7420c097f1c22d95c0d0b64083" exitCode=0 Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.243029 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"710e0aebead8837db1519d0ebfb741e833f3ed7420c097f1c22d95c0d0b64083"} Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.243193 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.243886 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.244768 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:20 crc 
kubenswrapper[4754]: I0218 19:18:20.244814 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.244834 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.246469 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.246542 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.246568 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:20 crc kubenswrapper[4754]: I0218 19:18:20.762435 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:18:21 crc kubenswrapper[4754]: W0218 19:18:21.101199 4754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 18 19:18:21 crc kubenswrapper[4754]: E0218 19:18:21.101300 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.146900 4754 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.149067 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 07:41:58.858853108 +0000 UTC Feb 18 19:18:21 crc kubenswrapper[4754]: W0218 19:18:21.152052 4754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 18 19:18:21 crc kubenswrapper[4754]: E0218 19:18:21.152199 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:18:21 crc kubenswrapper[4754]: E0218 19:18:21.156193 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="3.2s" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.252579 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad"} Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.252640 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175"} Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.252651 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9"} Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.252662 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585"} Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.258431 4754 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c5190b7092187e1c61ce42655a1732a4dca6ddf7fe391ebc731995ea488129cf" exitCode=0 Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.258589 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c5190b7092187e1c61ce42655a1732a4dca6ddf7fe391ebc731995ea488129cf"} Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.258647 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.261697 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.261739 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.261755 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.266742 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2b850d1c3185dba59c230f6286f3a76135edff3786413fd586f1594847ddd600"} Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.266852 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.268184 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.268216 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.268228 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.271335 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"55dcb9c40ddbefcf612d63ca8f95a6101bcb7372164e6f35c742617062763f9c"} Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.271369 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9437ec7801e5224e69e4648a5c6ae8228ce67a66fa49926879f0479a14b6e99d"} Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.271380 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"daa0d5ed3320e375aa7ce21f39b9ad34357cc203bdf072e2d3464424ad135058"} Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.271371 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.271425 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.272682 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.272691 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.272764 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.272778 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.272730 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.272847 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4754]: W0218 19:18:21.288698 4754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Feb 18 19:18:21 crc kubenswrapper[4754]: E0218 19:18:21.288915 4754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.313420 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.314615 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.401998 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.403315 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.403358 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.403369 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4754]: I0218 19:18:21.403408 4754 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:18:21 crc kubenswrapper[4754]: E0218 19:18:21.403938 4754 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.149621 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 01:08:23.63779628 +0000 UTC 
Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.278588 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.279170 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"41b0a12a18115542f6ba8b518f473921dc0a65be452b7f22f8d2cb599627a6e2"} Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.280682 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.280938 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.281059 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.284546 4754 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ecf526b726249235f13ea526506e3540f3468b17d59926e917dd40cfeb3fe5f6" exitCode=0 Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.284857 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.285261 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ecf526b726249235f13ea526506e3540f3468b17d59926e917dd40cfeb3fe5f6"} Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.285459 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.285480 4754 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.285489 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.286194 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.286650 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.286675 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.286686 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.286651 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.286783 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.286798 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.287422 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.287461 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.287473 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 
19:18:22.287701 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.287850 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4754]: I0218 19:18:22.287983 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4754]: I0218 19:18:23.150549 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 07:41:49.695762098 +0000 UTC Feb 18 19:18:23 crc kubenswrapper[4754]: I0218 19:18:23.156542 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:23 crc kubenswrapper[4754]: I0218 19:18:23.291707 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3e6de71f467cb3ca8c53b98156e5dcc3fcf875f6c3e51dda3cd972201f1dff27"} Feb 18 19:18:23 crc kubenswrapper[4754]: I0218 19:18:23.291755 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9b4594d0d5f1a9342ac7af89120cafdc12a4313bb9590198916f5da4cc2f6591"} Feb 18 19:18:23 crc kubenswrapper[4754]: I0218 19:18:23.291766 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"46b77cef12b8c3593dfa85d7822513c52fa384ef7cfe71e30f24300271eb730a"} Feb 18 19:18:23 crc kubenswrapper[4754]: I0218 19:18:23.291775 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5b0fb3d296168dd7b2584edeb7c9bcb692b389837c1d6e7848a30ae36b1fca86"} Feb 18 19:18:23 crc kubenswrapper[4754]: I0218 19:18:23.291806 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:23 crc kubenswrapper[4754]: I0218 19:18:23.291853 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:23 crc kubenswrapper[4754]: I0218 19:18:23.291888 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:23 crc kubenswrapper[4754]: I0218 19:18:23.291909 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:23 crc kubenswrapper[4754]: I0218 19:18:23.292888 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4754]: I0218 19:18:23.292912 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4754]: I0218 19:18:23.292922 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4754]: I0218 19:18:23.293226 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4754]: I0218 19:18:23.293249 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4754]: I0218 19:18:23.293261 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4754]: I0218 19:18:23.293250 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4754]: I0218 19:18:23.293343 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4754]: I0218 19:18:23.293351 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4754]: I0218 19:18:24.151275 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:28:20.035921267 +0000 UTC Feb 18 19:18:24 crc kubenswrapper[4754]: I0218 19:18:24.262206 4754 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 19:18:24 crc kubenswrapper[4754]: I0218 19:18:24.300845 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"50ce9e45fb31732d884c6570779abd8e272b02d032aeaec08779843c2667c4dd"} Feb 18 19:18:24 crc kubenswrapper[4754]: I0218 19:18:24.300959 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:24 crc kubenswrapper[4754]: I0218 19:18:24.301016 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:24 crc kubenswrapper[4754]: I0218 19:18:24.302547 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4754]: I0218 19:18:24.302652 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4754]: I0218 19:18:24.302696 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4754]: I0218 19:18:24.305658 4754 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4754]: I0218 19:18:24.305697 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4754]: I0218 19:18:24.305710 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4754]: I0218 19:18:24.313493 4754 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 19:18:24 crc kubenswrapper[4754]: I0218 19:18:24.313779 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 19:18:24 crc kubenswrapper[4754]: I0218 19:18:24.604990 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:24 crc kubenswrapper[4754]: I0218 19:18:24.607211 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4754]: I0218 19:18:24.607291 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4754]: I0218 19:18:24.607310 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 18 19:18:24 crc kubenswrapper[4754]: I0218 19:18:24.607350 4754 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:18:25 crc kubenswrapper[4754]: I0218 19:18:25.026762 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:25 crc kubenswrapper[4754]: I0218 19:18:25.151896 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 17:18:34.453879532 +0000 UTC Feb 18 19:18:25 crc kubenswrapper[4754]: I0218 19:18:25.303769 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:25 crc kubenswrapper[4754]: I0218 19:18:25.303958 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:25 crc kubenswrapper[4754]: I0218 19:18:25.305376 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4754]: I0218 19:18:25.305435 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4754]: I0218 19:18:25.305437 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4754]: I0218 19:18:25.305462 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:25 crc kubenswrapper[4754]: I0218 19:18:25.305565 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4754]: I0218 19:18:25.305598 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4754]: I0218 19:18:26.152329 4754 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:16:44.948060936 +0000 UTC Feb 18 19:18:27 crc kubenswrapper[4754]: I0218 19:18:27.152861 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:40:16.558696826 +0000 UTC Feb 18 19:18:28 crc kubenswrapper[4754]: I0218 19:18:28.153591 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:22:13.978460697 +0000 UTC Feb 18 19:18:28 crc kubenswrapper[4754]: E0218 19:18:28.299629 4754 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 19:18:28 crc kubenswrapper[4754]: I0218 19:18:28.306265 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 18 19:18:28 crc kubenswrapper[4754]: I0218 19:18:28.306498 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:28 crc kubenswrapper[4754]: I0218 19:18:28.307545 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4754]: I0218 19:18:28.307593 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4754]: I0218 19:18:28.307608 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4754]: I0218 19:18:29.154000 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 09:12:10.814329183 +0000 UTC Feb 18 19:18:29 crc kubenswrapper[4754]: I0218 
19:18:29.229865 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:18:29 crc kubenswrapper[4754]: I0218 19:18:29.230070 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:29 crc kubenswrapper[4754]: I0218 19:18:29.231269 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4754]: I0218 19:18:29.231296 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4754]: I0218 19:18:29.231306 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4754]: I0218 19:18:29.234298 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:18:29 crc kubenswrapper[4754]: I0218 19:18:29.314923 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:29 crc kubenswrapper[4754]: I0218 19:18:29.316088 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4754]: I0218 19:18:29.316126 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4754]: I0218 19:18:29.316156 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4754]: I0218 19:18:29.322661 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:18:30 crc kubenswrapper[4754]: I0218 19:18:30.155158 4754 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 16:34:12.972204839 +0000 UTC Feb 18 19:18:30 crc kubenswrapper[4754]: I0218 19:18:30.318296 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:30 crc kubenswrapper[4754]: I0218 19:18:30.319742 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:30 crc kubenswrapper[4754]: I0218 19:18:30.319785 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:30 crc kubenswrapper[4754]: I0218 19:18:30.319799 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4754]: I0218 19:18:31.155410 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:21:58.756763508 +0000 UTC Feb 18 19:18:31 crc kubenswrapper[4754]: I0218 19:18:31.932564 4754 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52892->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 18 19:18:31 crc kubenswrapper[4754]: I0218 19:18:31.932638 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52892->192.168.126.11:17697: read: connection reset by peer" Feb 18 19:18:31 crc kubenswrapper[4754]: I0218 19:18:31.994355 4754 patch_prober.go:28] 
interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 18 19:18:31 crc kubenswrapper[4754]: I0218 19:18:31.994438 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 19:18:32 crc kubenswrapper[4754]: I0218 19:18:32.007205 4754 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 18 19:18:32 crc kubenswrapper[4754]: I0218 19:18:32.007272 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 19:18:32 crc kubenswrapper[4754]: I0218 19:18:32.155872 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 02:45:52.426129016 +0000 UTC Feb 18 19:18:32 crc kubenswrapper[4754]: I0218 19:18:32.323907 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 19:18:32 crc kubenswrapper[4754]: I0218 
19:18:32.325290 4754 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="41b0a12a18115542f6ba8b518f473921dc0a65be452b7f22f8d2cb599627a6e2" exitCode=255 Feb 18 19:18:32 crc kubenswrapper[4754]: I0218 19:18:32.325334 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"41b0a12a18115542f6ba8b518f473921dc0a65be452b7f22f8d2cb599627a6e2"} Feb 18 19:18:32 crc kubenswrapper[4754]: I0218 19:18:32.325487 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:32 crc kubenswrapper[4754]: I0218 19:18:32.326226 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4754]: I0218 19:18:32.326263 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4754]: I0218 19:18:32.326275 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4754]: I0218 19:18:32.326790 4754 scope.go:117] "RemoveContainer" containerID="41b0a12a18115542f6ba8b518f473921dc0a65be452b7f22f8d2cb599627a6e2" Feb 18 19:18:32 crc kubenswrapper[4754]: I0218 19:18:32.600351 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 18 19:18:32 crc kubenswrapper[4754]: I0218 19:18:32.600779 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:32 crc kubenswrapper[4754]: I0218 19:18:32.602091 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4754]: I0218 19:18:32.602161 4754 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4754]: I0218 19:18:32.602175 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4754]: I0218 19:18:32.637383 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 18 19:18:33 crc kubenswrapper[4754]: I0218 19:18:33.156484 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 17:46:39.031964453 +0000 UTC Feb 18 19:18:33 crc kubenswrapper[4754]: I0218 19:18:33.160421 4754 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]log ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]etcd ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/generic-apiserver-start-informers ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/priority-and-fairness-filter ok Feb 18 19:18:33 crc kubenswrapper[4754]: 
[+]poststarthook/storage-object-count-tracker-hook ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/start-apiextensions-informers ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/start-apiextensions-controllers ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/crd-informer-synced ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/start-system-namespaces-controller ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 18 19:18:33 crc kubenswrapper[4754]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/bootstrap-controller ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/start-kube-aggregator-informers ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/apiservice-registration-controller ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 18 19:18:33 crc kubenswrapper[4754]: 
[+]poststarthook/apiservice-discovery-controller ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]autoregister-completion ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/apiservice-openapi-controller ok Feb 18 19:18:33 crc kubenswrapper[4754]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 18 19:18:33 crc kubenswrapper[4754]: livez check failed Feb 18 19:18:33 crc kubenswrapper[4754]: I0218 19:18:33.160483 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:18:33 crc kubenswrapper[4754]: I0218 19:18:33.317786 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 18 19:18:33 crc kubenswrapper[4754]: I0218 19:18:33.329508 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 19:18:33 crc kubenswrapper[4754]: I0218 19:18:33.331415 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459"} Feb 18 19:18:33 crc kubenswrapper[4754]: I0218 19:18:33.331493 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:33 crc kubenswrapper[4754]: I0218 19:18:33.331556 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:33 crc kubenswrapper[4754]: I0218 19:18:33.332489 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Feb 18 19:18:33 crc kubenswrapper[4754]: I0218 19:18:33.332513 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4754]: I0218 19:18:33.332524 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:33 crc kubenswrapper[4754]: I0218 19:18:33.333266 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:33 crc kubenswrapper[4754]: I0218 19:18:33.333290 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4754]: I0218 19:18:33.333299 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4754]: I0218 19:18:34.156674 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 00:26:04.069796968 +0000 UTC Feb 18 19:18:34 crc kubenswrapper[4754]: I0218 19:18:34.314023 4754 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 19:18:34 crc kubenswrapper[4754]: I0218 19:18:34.314161 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 19:18:34 crc 
kubenswrapper[4754]: I0218 19:18:34.334872 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:34 crc kubenswrapper[4754]: I0218 19:18:34.336351 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4754]: I0218 19:18:34.336392 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4754]: I0218 19:18:34.336408 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4754]: I0218 19:18:35.157437 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 06:52:30.383404046 +0000 UTC Feb 18 19:18:36 crc kubenswrapper[4754]: I0218 19:18:36.158346 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:10:31.835414144 +0000 UTC Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.009134 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.011293 4754 trace.go:236] Trace[1087930148]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:18:22.507) (total time: 14503ms): Feb 18 19:18:37 crc kubenswrapper[4754]: Trace[1087930148]: ---"Objects listed" error: 14503ms (19:18:37.011) Feb 18 19:18:37 crc kubenswrapper[4754]: Trace[1087930148]: [14.50376941s] [14.50376941s] END Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.011324 4754 reflector.go:368] Caches populated for *v1.CSIDriver 
from k8s.io/client-go/informers/factory.go:160 Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.011539 4754 trace.go:236] Trace[1782771468]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:18:24.695) (total time: 12316ms): Feb 18 19:18:37 crc kubenswrapper[4754]: Trace[1782771468]: ---"Objects listed" error: 12316ms (19:18:37.011) Feb 18 19:18:37 crc kubenswrapper[4754]: Trace[1782771468]: [12.316382856s] [12.316382856s] END Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.011549 4754 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.018594 4754 trace.go:236] Trace[485242795]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:18:25.676) (total time: 11342ms): Feb 18 19:18:37 crc kubenswrapper[4754]: Trace[485242795]: ---"Objects listed" error: 11341ms (19:18:37.018) Feb 18 19:18:37 crc kubenswrapper[4754]: Trace[485242795]: [11.342076169s] [11.342076169s] END Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.018641 4754 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.019353 4754 trace.go:236] Trace[667016972]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:18:26.952) (total time: 10066ms): Feb 18 19:18:37 crc kubenswrapper[4754]: Trace[667016972]: ---"Objects listed" error: 10066ms (19:18:37.019) Feb 18 19:18:37 crc kubenswrapper[4754]: Trace[667016972]: [10.066567287s] [10.066567287s] END Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.019370 4754 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.019728 4754 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from 
k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.019951 4754 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.030628 4754 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.135045 4754 apiserver.go:52] "Watching apiserver" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.138578 4754 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.138931 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.139291 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.139467 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.139536 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.139689 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.139703 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.139848 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.139871 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.140090 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.139837 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.142630 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.143995 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.144194 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.144454 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.144718 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.144260 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.145030 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.147246 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.147789 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.149633 4754 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 18 19:18:37 
crc kubenswrapper[4754]: I0218 19:18:37.158657 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:27:11.589408789 +0000 UTC Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.172611 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.189273 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.199057 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.214872 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.224876 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.231873 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.231922 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.231940 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 
19:18:37.231963 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.231983 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232001 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232018 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232037 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232055 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232106 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232160 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232179 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232220 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232241 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232258 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232276 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232292 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232313 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232334 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232352 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232356 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232360 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232368 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232456 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232477 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232493 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232523 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232544 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232566 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232586 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232588 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232639 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232659 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232674 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232678 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232773 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232794 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232834 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232849 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232839 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232895 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232926 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232952 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232974 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.232999 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233010 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233021 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233028 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233048 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233072 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233098 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233121 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233167 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233173 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233221 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233255 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233284 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233306 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233330 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233331 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233355 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233384 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233385 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233409 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233433 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233456 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233481 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233481 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233506 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233471 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233531 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233529 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233561 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233590 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233618 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233645 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233670 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233696 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233730 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233749 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233766 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233783 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233782 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233802 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233821 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233839 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233857 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233880 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233912 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233940 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233966 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233966 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.233989 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.234175 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.234305 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.234339 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.234367 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 
19:18:37.234391 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.234417 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.234447 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.234450 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.234528 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.234559 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.234637 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.234733 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.234824 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.234845 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.234930 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.234956 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.234981 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235000 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235020 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235039 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235061 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235084 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235113 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235133 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235167 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235192 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235211 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235231 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235249 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235259 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235267 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235316 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235458 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235485 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235512 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235469 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235535 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235710 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235782 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235690 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.235909 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:18:37.735884774 +0000 UTC m=+20.186297570 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235564 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235961 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235971 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235993 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236021 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236050 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236042 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod 
"1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236076 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236068 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236102 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236126 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236168 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236199 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236204 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236285 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236380 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236400 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.235618 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236221 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236451 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236558 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236590 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236593 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236641 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236670 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236705 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236732 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236736 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236759 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236790 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236815 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236842 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236863 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236890 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236908 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236930 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236948 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236966 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.236985 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237004 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237023 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237020 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237040 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237057 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237076 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237094 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237116 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237136 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237168 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237189 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237217 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237242 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237261 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237280 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237298 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237316 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237334 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237355 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237377 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237396 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237415 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237472 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237492 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237510 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237526 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237543 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237570 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237586 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237602 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237617 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237632 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237648 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237663 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237679 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237695 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237713 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237729 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237746 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237760 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237776 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237791 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237807 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237823 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237844 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237860 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237876 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237892 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237912 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237929 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237945 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237961 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237977 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237992 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238018 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238034 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238053 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238068 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238085 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238104 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238122 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238155 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238172 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238187 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238261 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238302 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238332 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238352 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238370 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238403 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238420 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238458 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238549 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238568 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238612 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238672 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238690 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238707 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238806 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238825 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238857 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.239838 4754 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.239852 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.239863 4754 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.239873 4754 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240322 4754 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240343 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240353 4754 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240363 4754 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240373 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240396 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240406 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240416 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240427 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240438 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240448 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240458 4754 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240468 4754 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240479 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240489 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240499 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240509 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240519 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240528 4754 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240538 4754 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240563 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240572 4754 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240582 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240591 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240600 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240660 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240670 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240680 4754 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240689 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240700 4754 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240714 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240735 4754 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240744 4754 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237086 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237341 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237468 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237531 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.254069 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237659 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237818 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.237898 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238036 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238071 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238099 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238099 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238248 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238440 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238489 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238488 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238629 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238858 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.238897 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.239118 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.239257 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.239367 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.239330 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.239428 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.239864 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.239968 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.239963 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240033 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240465 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240476 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.240599 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.241186 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.241312 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.241321 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.241324 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.241445 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.241523 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.241647 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.241708 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.241707 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.241748 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.241769 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.242006 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.242098 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.242211 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.242689 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.243007 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.243247 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.243272 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.243545 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.243678 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.243687 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.243682 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.243825 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.244044 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.244375 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.244444 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.244969 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.244998 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.245062 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.245134 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.245265 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.245508 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.245913 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.246566 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.242707 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.247692 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.247760 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.249043 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.249088 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.248754 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.249127 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.247962 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.249525 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.249545 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.249564 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.249573 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.249767 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.249852 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.249865 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.249882 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.251006 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.251311 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.251352 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.251369 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.251552 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.251636 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.251789 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.251632 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.252185 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.252184 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.252195 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.252298 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.252302 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.251797 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.252439 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.252474 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.252516 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.252730 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.252816 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.252991 4754 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.253000 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.253044 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.253225 4754 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.253294 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.253809 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.253848 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.253987 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.254089 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.254267 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.254793 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.254815 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.254817 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.254845 4754 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.254835 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:37.754787653 +0000 UTC m=+20.205200449 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.255507 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.255613 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:37.755585135 +0000 UTC m=+20.205997931 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.256243 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.256945 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.257002 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.257328 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.257472 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.257509 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:37.757474428 +0000 UTC m=+20.207887274 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.257933 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.258184 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.258560 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.258587 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.258870 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.259767 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.259789 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.260134 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.260226 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.260303 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.260428 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.261334 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.261594 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.261691 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.262032 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.263181 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.264495 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.264553 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.264732 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.265111 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.265243 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.265853 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.266232 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.267374 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.268266 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.268279 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.268348 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.268360 4754 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.268408 4754 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:37.768390694 +0000 UTC m=+20.218803490 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.269578 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.269894 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.271003 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.271681 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.274587 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.275991 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.276107 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.276425 4754 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.277012 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.278668 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.278813 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.278900 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.280023 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.280804 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.282814 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.282835 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.285482 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.285702 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.288981 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.290802 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.294109 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.317792 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.325042 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.325220 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341252 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341368 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341436 4754 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node 
\"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341450 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341463 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341474 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341464 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341484 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341611 4754 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341637 4754 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on 
node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341676 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341698 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341711 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341603 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341724 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341777 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341794 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341808 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341821 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341834 4754 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341846 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341858 4754 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341870 4754 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341881 4754 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341893 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341906 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341918 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341930 4754 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341943 4754 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341955 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341969 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341980 4754 
reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.341992 4754 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342004 4754 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342016 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342027 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342053 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342065 4754 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342076 4754 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342088 4754 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342099 4754 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342110 4754 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342126 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342154 4754 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342167 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342178 4754 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 
19:18:37.342191 4754 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342203 4754 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342214 4754 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342225 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342292 4754 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342306 4754 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342320 4754 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342334 4754 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342349 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342362 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342375 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342389 4754 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342402 4754 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342415 4754 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342428 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342444 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342461 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342473 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342484 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342495 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342507 4754 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342518 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342532 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342543 4754 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342554 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342566 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342577 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342600 4754 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342613 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342624 4754 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342635 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342646 4754 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342658 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342669 4754 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342681 4754 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342693 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342705 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342717 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342731 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342743 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342755 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342768 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342779 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 
19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342790 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342801 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342813 4754 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342823 4754 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342835 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342849 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342860 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342871 4754 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342881 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342892 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342905 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342917 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342929 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342941 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342955 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on 
node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342966 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342978 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.342989 4754 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343001 4754 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343012 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343023 4754 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343035 4754 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343048 4754 reconciler_common.go:293] "Volume 
detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343060 4754 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343071 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343082 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343093 4754 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343106 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343117 4754 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343130 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343158 4754 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343169 4754 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343182 4754 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343196 4754 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343214 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343230 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343243 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343254 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343266 4754 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343283 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343295 4754 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343307 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343319 4754 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343330 4754 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343347 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343364 4754 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343376 4754 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343389 4754 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343404 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343420 4754 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343434 4754 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343452 4754 
reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343467 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343479 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343490 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343505 4754 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343517 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343529 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343540 4754 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343552 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343563 4754 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343579 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343595 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343609 4754 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343621 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343633 4754 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343647 4754 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343662 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.343678 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.452934 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.463067 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.468579 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:18:37 crc kubenswrapper[4754]: W0218 19:18:37.479453 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-b1cad14d7152afbacf0b834cc677715d354c50a0dbb013d4d81e3273540e3a5a WatchSource:0}: Error finding container b1cad14d7152afbacf0b834cc677715d354c50a0dbb013d4d81e3273540e3a5a: Status 404 returned error can't find the container with id b1cad14d7152afbacf0b834cc677715d354c50a0dbb013d4d81e3273540e3a5a Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.747369 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.747461 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:18:38.747432053 +0000 UTC m=+21.197844849 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.849031 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.849095 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.849203 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:37 crc kubenswrapper[4754]: I0218 19:18:37.849232 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.849344 4754 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.849465 4754 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.849481 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.849624 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.849664 4754 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.849479 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.849705 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.849720 4754 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.849500 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:38.849471101 +0000 UTC m=+21.299883947 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.849801 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:38.84978329 +0000 UTC m=+21.300196086 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.849830 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:38.849819181 +0000 UTC m=+21.300231977 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:37 crc kubenswrapper[4754]: E0218 19:18:37.849848 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:38.849843182 +0000 UTC m=+21.300255978 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.159765 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:48:33.158611132 +0000 UTC Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.163173 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.163538 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.167595 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.172962 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.175863 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.183377 4754 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.194478 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.210916 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.213007 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.213604 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.214484 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.215082 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.215977 
4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.217391 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.217990 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.218929 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.219529 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.220409 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.220888 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.221949 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.222465 
4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.222977 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.223882 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.224430 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.225359 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.225745 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.226352 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.227347 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.227781 
4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.228329 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.229115 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.229776 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.230230 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.230638 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.231339 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 
19:18:38.232347 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.232807 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.233792 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.234392 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.234847 4754 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.235348 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.236914 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.237393 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" 
path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.238281 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.239722 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.240643 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.241616 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.242372 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.243401 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.243880 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.244812 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.245525 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.246479 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.247011 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.247080 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.247929 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.248475 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.249558 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.250047 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.250881 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.251383 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.251893 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.252910 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.253619 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.265916 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.287575 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.304954 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.324445 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.352961 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.354220 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.355041 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.357118 4754 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459" exitCode=255 Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.357211 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459"} Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.357345 4754 scope.go:117] "RemoveContainer" containerID="41b0a12a18115542f6ba8b518f473921dc0a65be452b7f22f8d2cb599627a6e2" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.357727 4754 scope.go:117] "RemoveContainer" 
containerID="92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459" Feb 18 19:18:38 crc kubenswrapper[4754]: E0218 19:18:38.357891 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.363801 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d"} Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.363861 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3"} Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.363879 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"38e57faf661901998836302ab06e2ceb81aa4204788de6892cb2e64238e9a0a4"} Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.366162 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b1cad14d7152afbacf0b834cc677715d354c50a0dbb013d4d81e3273540e3a5a"} Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.367807 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd"} Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.367848 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c971165cfe1d1e8c1726bedc042f51fdae5e049054eef7790cc01365caf1d02e"} Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.389599 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.407638 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41b0a12a18115542f6ba8b518f473921dc0a65be452b7f22f8d2cb599627a6e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:31Z\\\",\\\"message\\\":\\\"W0218 19:18:21.454580 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:18:21.455001 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771442301 cert, and key in /tmp/serving-cert-2411503173/serving-signer.crt, 
/tmp/serving-cert-2411503173/serving-signer.key\\\\nI0218 19:18:21.720740 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:18:21.725332 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:18:21.725501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:21.726167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2411503173/tls.crt::/tmp/serving-cert-2411503173/tls.key\\\\\\\"\\\\nF0218 19:18:31.915770 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.423101 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.438823 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.460376 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41b0a12a18115542f6ba8b518f473921dc0a65be452b7f22f8d2cb599627a6e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:31Z\\\",\\\"message\\\":\\\"W0218 19:18:21.454580 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:18:21.455001 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771442301 cert, and key in /tmp/serving-cert-2411503173/serving-signer.crt, 
/tmp/serving-cert-2411503173/serving-signer.key\\\\nI0218 19:18:21.720740 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:18:21.725332 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:18:21.725501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:21.726167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2411503173/tls.crt::/tmp/serving-cert-2411503173/tls.key\\\\\\\"\\\\nF0218 19:18:31.915770 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.476772 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.492916 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.506101 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.524190 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.538501 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.554958 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.572564 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41b0a12a18115542f6ba8b518f473921dc0a65be452b7f22f8d2cb599627a6e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:31Z\\\",\\\"message\\\":\\\"W0218 19:18:21.454580 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0218 19:18:21.455001 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771442301 cert, and key in /tmp/serving-cert-2411503173/serving-signer.crt, 
/tmp/serving-cert-2411503173/serving-signer.key\\\\nI0218 19:18:21.720740 1 observer_polling.go:159] Starting file observer\\\\nW0218 19:18:21.725332 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0218 19:18:21.725501 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:21.726167 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2411503173/tls.crt::/tmp/serving-cert-2411503173/tls.key\\\\\\\"\\\\nF0218 19:18:31.915770 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.593264 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.615538 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.633313 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.652157 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.757164 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:18:38 crc kubenswrapper[4754]: E0218 19:18:38.757342 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:18:40.757324753 +0000 UTC m=+23.207737549 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.858328 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.858373 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.858395 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:38 crc kubenswrapper[4754]: I0218 19:18:38.858415 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:38 crc kubenswrapper[4754]: E0218 19:18:38.858493 4754 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:18:38 crc kubenswrapper[4754]: E0218 19:18:38.858574 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:18:38 crc kubenswrapper[4754]: E0218 19:18:38.858614 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:18:38 crc kubenswrapper[4754]: E0218 19:18:38.858607 4754 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:18:38 crc kubenswrapper[4754]: E0218 19:18:38.858630 4754 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:38 crc kubenswrapper[4754]: E0218 19:18:38.858769 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:40.858740234 +0000 UTC m=+23.309153090 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:38 crc kubenswrapper[4754]: E0218 19:18:38.858574 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:18:38 crc kubenswrapper[4754]: E0218 19:18:38.858807 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:18:38 crc kubenswrapper[4754]: E0218 19:18:38.858818 4754 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:38 crc kubenswrapper[4754]: E0218 19:18:38.858847 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:40.858838437 +0000 UTC m=+23.309251303 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:38 crc kubenswrapper[4754]: E0218 19:18:38.859049 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:40.859023392 +0000 UTC m=+23.309436188 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:18:38 crc kubenswrapper[4754]: E0218 19:18:38.859086 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:40.859076273 +0000 UTC m=+23.309489149 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:18:39 crc kubenswrapper[4754]: I0218 19:18:39.160547 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 18:01:02.129413561 +0000 UTC Feb 18 19:18:39 crc kubenswrapper[4754]: I0218 19:18:39.209390 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:39 crc kubenswrapper[4754]: I0218 19:18:39.209400 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:39 crc kubenswrapper[4754]: E0218 19:18:39.209541 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:39 crc kubenswrapper[4754]: I0218 19:18:39.209418 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:39 crc kubenswrapper[4754]: E0218 19:18:39.209623 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:39 crc kubenswrapper[4754]: E0218 19:18:39.209760 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:39 crc kubenswrapper[4754]: I0218 19:18:39.371490 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 19:18:39 crc kubenswrapper[4754]: I0218 19:18:39.373615 4754 scope.go:117] "RemoveContainer" containerID="92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459" Feb 18 19:18:39 crc kubenswrapper[4754]: E0218 19:18:39.373780 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 18 19:18:39 crc kubenswrapper[4754]: I0218 
19:18:39.392054 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:39Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:39 crc kubenswrapper[4754]: I0218 19:18:39.406474 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:39Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:39 crc kubenswrapper[4754]: I0218 19:18:39.421566 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:39Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:39 crc kubenswrapper[4754]: I0218 19:18:39.433415 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:39Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:39 crc kubenswrapper[4754]: I0218 19:18:39.443049 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:39Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:39 crc kubenswrapper[4754]: I0218 19:18:39.454522 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4
813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:39Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:39 crc kubenswrapper[4754]: I0218 19:18:39.464892 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:39Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:40 crc kubenswrapper[4754]: I0218 19:18:40.160923 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 17:37:11.843091357 +0000 UTC Feb 18 19:18:40 crc kubenswrapper[4754]: I0218 19:18:40.378801 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a"} Feb 18 19:18:40 crc kubenswrapper[4754]: I0218 19:18:40.397101 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:40Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:40 crc kubenswrapper[4754]: I0218 19:18:40.416926 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:40Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:40 crc kubenswrapper[4754]: I0218 19:18:40.439934 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:40Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:40 crc kubenswrapper[4754]: I0218 19:18:40.461496 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:40Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:40 crc kubenswrapper[4754]: I0218 19:18:40.484484 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:40Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:40 crc kubenswrapper[4754]: I0218 19:18:40.497395 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:40Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:40 crc kubenswrapper[4754]: I0218 19:18:40.513292 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3
db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:40Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:40 crc kubenswrapper[4754]: I0218 19:18:40.776820 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:18:40 crc kubenswrapper[4754]: E0218 19:18:40.777200 4754 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:18:44.777169065 +0000 UTC m=+27.227581871 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:18:40 crc kubenswrapper[4754]: I0218 19:18:40.878122 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:40 crc kubenswrapper[4754]: I0218 19:18:40.878251 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:40 crc kubenswrapper[4754]: I0218 19:18:40.878300 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:40 crc kubenswrapper[4754]: E0218 19:18:40.878336 4754 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:18:40 crc kubenswrapper[4754]: E0218 19:18:40.878434 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:44.8784066 +0000 UTC m=+27.328819406 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:18:40 crc kubenswrapper[4754]: I0218 19:18:40.878338 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:40 crc kubenswrapper[4754]: E0218 19:18:40.878503 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:18:40 crc kubenswrapper[4754]: E0218 19:18:40.878528 4754 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:18:40 crc kubenswrapper[4754]: E0218 19:18:40.878627 4754 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:44.878597725 +0000 UTC m=+27.329010521 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:18:40 crc kubenswrapper[4754]: E0218 19:18:40.878526 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:18:40 crc kubenswrapper[4754]: E0218 19:18:40.878697 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:18:40 crc kubenswrapper[4754]: E0218 19:18:40.878723 4754 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:40 crc kubenswrapper[4754]: E0218 19:18:40.878547 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:18:40 crc kubenswrapper[4754]: E0218 19:18:40.878755 4754 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:40 crc kubenswrapper[4754]: E0218 19:18:40.878777 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:44.87876153 +0000 UTC m=+27.329174426 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:40 crc kubenswrapper[4754]: E0218 19:18:40.878823 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:44.878794471 +0000 UTC m=+27.329207267 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.161553 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:27:47.293798028 +0000 UTC Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.209379 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.209423 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.209403 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:41 crc kubenswrapper[4754]: E0218 19:18:41.209567 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:41 crc kubenswrapper[4754]: E0218 19:18:41.209671 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:41 crc kubenswrapper[4754]: E0218 19:18:41.209743 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.326761 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.332694 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.341191 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.351699 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.373809 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.393216 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.408432 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.425440 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.440817 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4
813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.454570 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.468911 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.484121 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.499407 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.514598 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.526975 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.542698 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.558402 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4754]: I0218 19:18:41.575402 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:42 crc kubenswrapper[4754]: I0218 19:18:42.161770 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:01:50.550410113 +0000 UTC Feb 18 19:18:42 crc kubenswrapper[4754]: I0218 19:18:42.212409 4754 csr.go:261] certificate signing request csr-gbk2v is approved, waiting to be issued Feb 18 19:18:42 crc kubenswrapper[4754]: I0218 19:18:42.240467 4754 csr.go:257] certificate signing request csr-gbk2v is issued Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.028766 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-z5qkd"] Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.029280 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-z5qkd" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.031579 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.031633 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.033436 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.035028 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-pp2q2"] Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.035453 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.035527 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wmjxr"] Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.035894 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-tpcwn"] Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.036029 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.045619 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.045693 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.045929 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.045949 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.046029 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.046367 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.046383 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.050546 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.050823 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.051008 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.051131 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.053579 4754 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.053639 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.057304 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.073356 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.090673 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.098615 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/84dca4a4-85d4-442f-a34d-d12df5252a65-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.098679 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-multus-cni-dir\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.098710 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/84dca4a4-85d4-442f-a34d-d12df5252a65-os-release\") pod \"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.098738 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-cnibin\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.098765 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8-rootfs\") pod \"machine-config-daemon-wmjxr\" (UID: \"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\") " pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.098808 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkrdl\" (UniqueName: \"kubernetes.io/projected/1f810067-9720-4365-8d1b-8831300d10ae-kube-api-access-vkrdl\") pod \"node-resolver-z5qkd\" (UID: \"1f810067-9720-4365-8d1b-8831300d10ae\") " pod="openshift-dns/node-resolver-z5qkd" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.098837 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-host-var-lib-kubelet\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.098859 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-etc-kubernetes\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.098886 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfdps\" (UniqueName: 
\"kubernetes.io/projected/5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8-kube-api-access-sfdps\") pod \"machine-config-daemon-wmjxr\" (UID: \"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\") " pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.098911 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8-proxy-tls\") pod \"machine-config-daemon-wmjxr\" (UID: \"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\") " pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.098936 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/55244610-cf2e-4b72-b8b7-9d55898fbb62-cni-binary-copy\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.098964 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-host-var-lib-cni-multus\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.098999 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-host-run-k8s-cni-cncf-io\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.099027 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-hostroot\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.099053 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-host-run-multus-certs\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.099128 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-system-cni-dir\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.099192 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/84dca4a4-85d4-442f-a34d-d12df5252a65-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.099228 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-os-release\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.099249 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-multus-socket-dir-parent\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.099272 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/84dca4a4-85d4-442f-a34d-d12df5252a65-system-cni-dir\") pod \"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.099299 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-host-var-lib-cni-bin\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.099338 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1f810067-9720-4365-8d1b-8831300d10ae-hosts-file\") pod \"node-resolver-z5qkd\" (UID: \"1f810067-9720-4365-8d1b-8831300d10ae\") " pod="openshift-dns/node-resolver-z5qkd" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.099357 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8-mcd-auth-proxy-config\") pod \"machine-config-daemon-wmjxr\" (UID: \"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\") " pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.099378 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/84dca4a4-85d4-442f-a34d-d12df5252a65-cnibin\") pod \"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.099401 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/84dca4a4-85d4-442f-a34d-d12df5252a65-cni-binary-copy\") pod \"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.099430 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-host-run-netns\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.099454 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/55244610-cf2e-4b72-b8b7-9d55898fbb62-multus-daemon-config\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.099471 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtgvb\" (UniqueName: \"kubernetes.io/projected/55244610-cf2e-4b72-b8b7-9d55898fbb62-kube-api-access-xtgvb\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.099499 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrjj4\" (UniqueName: \"kubernetes.io/projected/84dca4a4-85d4-442f-a34d-d12df5252a65-kube-api-access-mrjj4\") pod \"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.099528 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-multus-conf-dir\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.105838 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e0
1a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.123612 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.137683 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.153871 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.162024 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 05:42:47.858925105 +0000 UTC Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.171614 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-iden
tity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.181914 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.195915 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.200859 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8-proxy-tls\") pod \"machine-config-daemon-wmjxr\" (UID: \"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\") " pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.200903 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/55244610-cf2e-4b72-b8b7-9d55898fbb62-cni-binary-copy\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.200924 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-host-var-lib-cni-multus\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.200944 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-hostroot\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.200962 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-host-run-multus-certs\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201193 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-host-run-k8s-cni-cncf-io\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201221 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-system-cni-dir\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201245 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/84dca4a4-85d4-442f-a34d-d12df5252a65-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201270 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-os-release\") pod \"multus-pp2q2\" 
(UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201289 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-multus-socket-dir-parent\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201344 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/84dca4a4-85d4-442f-a34d-d12df5252a65-system-cni-dir\") pod \"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201363 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-host-var-lib-cni-bin\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201385 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/84dca4a4-85d4-442f-a34d-d12df5252a65-cnibin\") pod \"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201369 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-hostroot\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 
18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201408 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/84dca4a4-85d4-442f-a34d-d12df5252a65-cni-binary-copy\") pod \"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201491 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-host-run-multus-certs\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201517 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-host-var-lib-cni-bin\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201502 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-host-run-k8s-cni-cncf-io\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201576 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-system-cni-dir\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201607 4754 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-multus-socket-dir-parent\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201533 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1f810067-9720-4365-8d1b-8831300d10ae-hosts-file\") pod \"node-resolver-z5qkd\" (UID: \"1f810067-9720-4365-8d1b-8831300d10ae\") " pod="openshift-dns/node-resolver-z5qkd" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201358 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-host-var-lib-cni-multus\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201559 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/84dca4a4-85d4-442f-a34d-d12df5252a65-cnibin\") pod \"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201602 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1f810067-9720-4365-8d1b-8831300d10ae-hosts-file\") pod \"node-resolver-z5qkd\" (UID: \"1f810067-9720-4365-8d1b-8831300d10ae\") " pod="openshift-dns/node-resolver-z5qkd" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201693 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/84dca4a4-85d4-442f-a34d-d12df5252a65-system-cni-dir\") pod 
\"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201732 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8-mcd-auth-proxy-config\") pod \"machine-config-daemon-wmjxr\" (UID: \"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\") " pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201822 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-host-run-netns\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201849 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/55244610-cf2e-4b72-b8b7-9d55898fbb62-multus-daemon-config\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201870 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtgvb\" (UniqueName: \"kubernetes.io/projected/55244610-cf2e-4b72-b8b7-9d55898fbb62-kube-api-access-xtgvb\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201892 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrjj4\" (UniqueName: \"kubernetes.io/projected/84dca4a4-85d4-442f-a34d-d12df5252a65-kube-api-access-mrjj4\") pod 
\"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201912 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-multus-conf-dir\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201931 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-multus-cni-dir\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201969 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/84dca4a4-85d4-442f-a34d-d12df5252a65-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.201989 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/84dca4a4-85d4-442f-a34d-d12df5252a65-os-release\") pod \"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.202011 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-cnibin\") pod \"multus-pp2q2\" (UID: 
\"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.202036 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8-rootfs\") pod \"machine-config-daemon-wmjxr\" (UID: \"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\") " pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.202062 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-etc-kubernetes\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.202075 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-multus-conf-dir\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.202092 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkrdl\" (UniqueName: \"kubernetes.io/projected/1f810067-9720-4365-8d1b-8831300d10ae-kube-api-access-vkrdl\") pod \"node-resolver-z5qkd\" (UID: \"1f810067-9720-4365-8d1b-8831300d10ae\") " pod="openshift-dns/node-resolver-z5qkd" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.202136 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-multus-cni-dir\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 
19:18:43.202208 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8-rootfs\") pod \"machine-config-daemon-wmjxr\" (UID: \"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\") " pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.202207 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-host-var-lib-kubelet\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.202251 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-cnibin\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.202255 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfdps\" (UniqueName: \"kubernetes.io/projected/5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8-kube-api-access-sfdps\") pod \"machine-config-daemon-wmjxr\" (UID: \"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\") " pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.202064 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-os-release\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.202022 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-host-run-netns\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.202297 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/84dca4a4-85d4-442f-a34d-d12df5252a65-os-release\") pod \"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.202311 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/84dca4a4-85d4-442f-a34d-d12df5252a65-cni-binary-copy\") pod \"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.202359 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-host-var-lib-kubelet\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.202369 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8-mcd-auth-proxy-config\") pod \"machine-config-daemon-wmjxr\" (UID: \"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\") " pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.202431 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/55244610-cf2e-4b72-b8b7-9d55898fbb62-etc-kubernetes\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.202482 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/84dca4a4-85d4-442f-a34d-d12df5252a65-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.202791 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/84dca4a4-85d4-442f-a34d-d12df5252a65-tuning-conf-dir\") pod \"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.203174 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/55244610-cf2e-4b72-b8b7-9d55898fbb62-cni-binary-copy\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.203209 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/55244610-cf2e-4b72-b8b7-9d55898fbb62-multus-daemon-config\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.213049 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8-proxy-tls\") pod 
\"machine-config-daemon-wmjxr\" (UID: \"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\") " pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.214331 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:43 crc kubenswrapper[4754]: E0218 19:18:43.214470 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.214840 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.214942 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:43 crc kubenswrapper[4754]: E0218 19:18:43.215181 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:43 crc kubenswrapper[4754]: E0218 19:18:43.215300 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.218327 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.224227 4754 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xtgvb\" (UniqueName: \"kubernetes.io/projected/55244610-cf2e-4b72-b8b7-9d55898fbb62-kube-api-access-xtgvb\") pod \"multus-pp2q2\" (UID: \"55244610-cf2e-4b72-b8b7-9d55898fbb62\") " pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.225125 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrjj4\" (UniqueName: \"kubernetes.io/projected/84dca4a4-85d4-442f-a34d-d12df5252a65-kube-api-access-mrjj4\") pod \"multus-additional-cni-plugins-tpcwn\" (UID: \"84dca4a4-85d4-442f-a34d-d12df5252a65\") " pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.228873 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfdps\" (UniqueName: \"kubernetes.io/projected/5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8-kube-api-access-sfdps\") pod \"machine-config-daemon-wmjxr\" (UID: \"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\") " pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.229536 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkrdl\" (UniqueName: \"kubernetes.io/projected/1f810067-9720-4365-8d1b-8831300d10ae-kube-api-access-vkrdl\") pod \"node-resolver-z5qkd\" (UID: \"1f810067-9720-4365-8d1b-8831300d10ae\") " pod="openshift-dns/node-resolver-z5qkd" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.236403 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.241583 4754 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-18 19:13:42 +0000 UTC, rotation deadline is 2027-01-10 17:12:06.278475497 +0000 UTC Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.241635 4754 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7821h53m23.036847117s for next certificate rotation Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.251727 4754 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.276909 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.299117 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.312767 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.331378 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.344861 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.351764 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-z5qkd" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.360544 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-pp2q2" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.366003 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.367420 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.371997 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:18:43 crc kubenswrapper[4754]: W0218 19:18:43.385403 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55244610_cf2e_4b72_b8b7_9d55898fbb62.slice/crio-adad574d81f384aa41d1ef0c4c3b3e87144430bdb73c00116d7d4d4c31af4a3d WatchSource:0}: Error finding container adad574d81f384aa41d1ef0c4c3b3e87144430bdb73c00116d7d4d4c31af4a3d: Status 404 returned error can't find the container with id adad574d81f384aa41d1ef0c4c3b3e87144430bdb73c00116d7d4d4c31af4a3d Feb 18 19:18:43 crc kubenswrapper[4754]: W0218 19:18:43.387842 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84dca4a4_85d4_442f_a34d_d12df5252a65.slice/crio-aeb2cf5800cddebe7286eac721c41707296ed72f4151976aa084751a63afb99d WatchSource:0}: Error finding container aeb2cf5800cddebe7286eac721c41707296ed72f4151976aa084751a63afb99d: Status 404 returned error can't find the container with id aeb2cf5800cddebe7286eac721c41707296ed72f4151976aa084751a63afb99d Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.391614 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.392116 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-z5qkd" event={"ID":"1f810067-9720-4365-8d1b-8831300d10ae","Type":"ContainerStarted","Data":"40a5b72c921c6e4e2d762bdd18deaa628591793edf262b384a1eed7de57ce097"} Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.414569 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4
813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.420894 4754 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.424405 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.424449 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.424459 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.424589 4754 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.428218 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-glx55"] Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.429277 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.432570 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.432862 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.433040 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.433681 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.433843 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.433948 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.435239 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.440666 4754 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.440988 4754 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.442369 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.442402 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc 
kubenswrapper[4754]: I0218 19:18:43.442412 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.442454 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.442466 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.453330 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: E0218 19:18:43.465555 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.470007 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.471312 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.471358 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.471373 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.471395 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.471408 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.483256 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: E0218 19:18:43.488022 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.492907 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.492967 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.492977 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.493000 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.493014 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.497954 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.505041 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/82e5683f-ada7-4578-a6e3-6f0dd72dd149-ovnkube-script-lib\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.505088 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-etc-openvswitch\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.505116 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-node-log\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 
19:18:43.505133 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-cni-netd\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.505168 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-systemd-units\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.505184 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-log-socket\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.505225 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-var-lib-openvswitch\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.505240 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/82e5683f-ada7-4578-a6e3-6f0dd72dd149-ovnkube-config\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc 
kubenswrapper[4754]: I0218 19:18:43.505258 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/82e5683f-ada7-4578-a6e3-6f0dd72dd149-ovn-node-metrics-cert\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.505273 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-run-systemd\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.505306 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-run-openvswitch\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.505322 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/82e5683f-ada7-4578-a6e3-6f0dd72dd149-env-overrides\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.505339 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-slash\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 
19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.505357 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-run-ovn\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.505375 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-run-ovn-kubernetes\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.505395 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpsvw\" (UniqueName: \"kubernetes.io/projected/82e5683f-ada7-4578-a6e3-6f0dd72dd149-kube-api-access-rpsvw\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.505417 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-run-netns\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.505434 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-cni-bin\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.505452 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.505473 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-kubelet\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: E0218 19:18:43.506016 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.510808 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.511970 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.512000 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.512011 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.512032 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4754]: 
I0218 19:18:43.512047 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.525091 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: E0218 19:18:43.527131 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.531060 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.531109 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.531123 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.531161 
4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.531183 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.537792 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: E0218 19:18:43.542612 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: E0218 19:18:43.542876 4754 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.544682 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.544749 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.544764 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.544786 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.544806 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.553584 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.571780 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.585829 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.596261 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606286 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-run-openvswitch\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606337 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/82e5683f-ada7-4578-a6e3-6f0dd72dd149-env-overrides\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606375 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-slash\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606402 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-run-ovn\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606423 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-run-ovn-kubernetes\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606447 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpsvw\" (UniqueName: \"kubernetes.io/projected/82e5683f-ada7-4578-a6e3-6f0dd72dd149-kube-api-access-rpsvw\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc 
kubenswrapper[4754]: I0218 19:18:43.606480 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-cni-bin\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606501 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606523 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-run-netns\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606543 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-kubelet\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606564 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-etc-openvswitch\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 
19:18:43.606585 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/82e5683f-ada7-4578-a6e3-6f0dd72dd149-ovnkube-script-lib\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606623 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-cni-netd\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606650 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-node-log\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606673 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-systemd-units\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606699 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-log-socket\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606731 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-var-lib-openvswitch\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606749 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/82e5683f-ada7-4578-a6e3-6f0dd72dd149-ovnkube-config\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606774 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/82e5683f-ada7-4578-a6e3-6f0dd72dd149-ovn-node-metrics-cert\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606808 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-run-systemd\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606884 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-run-systemd\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606913 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-run-openvswitch\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606947 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-cni-bin\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.606984 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.607012 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-run-netns\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.607037 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-kubelet\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.607063 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-etc-openvswitch\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.607520 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/82e5683f-ada7-4578-a6e3-6f0dd72dd149-env-overrides\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.607570 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-slash\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.607599 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-run-ovn\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.607624 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-run-ovn-kubernetes\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.607651 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-log-socket\") pod \"ovnkube-node-glx55\" (UID: 
\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.607674 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-cni-netd\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.607697 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-node-log\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.607721 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-systemd-units\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.607785 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/82e5683f-ada7-4578-a6e3-6f0dd72dd149-ovnkube-script-lib\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.608614 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-var-lib-openvswitch\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc 
kubenswrapper[4754]: I0218 19:18:43.608803 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/82e5683f-ada7-4578-a6e3-6f0dd72dd149-ovnkube-config\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.609946 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.612015 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/82e5683f-ada7-4578-a6e3-6f0dd72dd149-ovn-node-metrics-cert\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.624094 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.624762 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpsvw\" (UniqueName: \"kubernetes.io/projected/82e5683f-ada7-4578-a6e3-6f0dd72dd149-kube-api-access-rpsvw\") pod \"ovnkube-node-glx55\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.649774 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.650118 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.650127 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 
crc kubenswrapper[4754]: I0218 19:18:43.650155 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.650166 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.739934 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.740652 4754 scope.go:117] "RemoveContainer" containerID="92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459" Feb 18 19:18:43 crc kubenswrapper[4754]: E0218 19:18:43.740814 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.750921 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.753605 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.753681 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.753700 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.753760 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.753777 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4754]: W0218 19:18:43.807703 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82e5683f_ada7_4578_a6e3_6f0dd72dd149.slice/crio-5274bd996f203fc6c66de41bb98371f580b753a65d6bb819bf202865f4f96db6 WatchSource:0}: Error finding container 5274bd996f203fc6c66de41bb98371f580b753a65d6bb819bf202865f4f96db6: Status 404 returned error can't find the container with id 5274bd996f203fc6c66de41bb98371f580b753a65d6bb819bf202865f4f96db6 Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.857297 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.857341 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.857354 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.857374 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.857389 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.961051 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.961103 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.961114 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.961160 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4754]: I0218 19:18:43.961173 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.064390 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.064452 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.064463 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.064485 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.064500 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.162331 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 17:19:26.908840352 +0000 UTC Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.167388 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.167451 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.167468 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.167489 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.167503 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.270617 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.270671 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.270684 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.270701 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.270714 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.374705 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.374791 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.374817 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.374847 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.374865 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.398645 4754 generic.go:334] "Generic (PLEG): container finished" podID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerID="4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b" exitCode=0 Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.398750 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerDied","Data":"4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.398829 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerStarted","Data":"5274bd996f203fc6c66de41bb98371f580b753a65d6bb819bf202865f4f96db6"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.401304 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.401368 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.401384 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"beb97e39915c5702b2f543c7adfac559891ac7971c4a6bfd602d8c2265318051"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 
19:18:44.404422 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-z5qkd" event={"ID":"1f810067-9720-4365-8d1b-8831300d10ae","Type":"ContainerStarted","Data":"741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.407199 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" event={"ID":"84dca4a4-85d4-442f-a34d-d12df5252a65","Type":"ContainerStarted","Data":"d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.407256 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" event={"ID":"84dca4a4-85d4-442f-a34d-d12df5252a65","Type":"ContainerStarted","Data":"aeb2cf5800cddebe7286eac721c41707296ed72f4151976aa084751a63afb99d"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.409400 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pp2q2" event={"ID":"55244610-cf2e-4b72-b8b7-9d55898fbb62","Type":"ContainerStarted","Data":"a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.409472 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pp2q2" event={"ID":"55244610-cf2e-4b72-b8b7-9d55898fbb62","Type":"ContainerStarted","Data":"adad574d81f384aa41d1ef0c4c3b3e87144430bdb73c00116d7d4d4c31af4a3d"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.437229 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.463709 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.477779 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.477841 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.477853 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.477874 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.477887 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.499521 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.513497 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.530621 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controll
er\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.553159 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.568198 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.581349 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.581407 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.581417 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.581435 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.581446 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.584341 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.599237 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.620088 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.634023 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.646682 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.666353 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.678829 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.684510 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.684546 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.684560 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.684580 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.684593 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.691720 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.710555 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba
93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.726006 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.741987 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.756298 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.769362 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.783247 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.786760 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.786799 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.786809 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc 
kubenswrapper[4754]: I0218 19:18:44.786826 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.786836 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.797746 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.819421 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.819704 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:18:44 crc kubenswrapper[4754]: E0218 19:18:44.819852 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:18:52.819828921 +0000 UTC m=+35.270241717 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.832561 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\
\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.842722 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.856995 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.889864 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.889935 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.889950 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.889977 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.889994 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.920620 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.920693 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.920735 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.920764 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:44 crc kubenswrapper[4754]: E0218 19:18:44.920871 4754 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:18:44 crc kubenswrapper[4754]: E0218 19:18:44.920930 4754 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:18:44 crc kubenswrapper[4754]: E0218 19:18:44.920955 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:52.920934842 +0000 UTC m=+35.371347638 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:18:44 crc kubenswrapper[4754]: E0218 19:18:44.921045 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:52.921020995 +0000 UTC m=+35.371434011 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:18:44 crc kubenswrapper[4754]: E0218 19:18:44.920881 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:18:44 crc kubenswrapper[4754]: E0218 19:18:44.921084 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:18:44 crc kubenswrapper[4754]: E0218 19:18:44.921102 4754 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:44 crc kubenswrapper[4754]: E0218 19:18:44.921155 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:52.921129988 +0000 UTC m=+35.371543024 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:44 crc kubenswrapper[4754]: E0218 19:18:44.920881 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:18:44 crc kubenswrapper[4754]: E0218 19:18:44.921182 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:18:44 crc kubenswrapper[4754]: E0218 19:18:44.921193 4754 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:44 crc kubenswrapper[4754]: E0218 19:18:44.921229 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:52.9212178 +0000 UTC m=+35.371630796 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.992110 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.992155 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.992163 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.992175 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4754]: I0218 19:18:44.992186 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.094335 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.094396 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.094405 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.094421 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.094430 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.163418 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 06:19:52.190988791 +0000 UTC Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.197104 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.197168 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.197183 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.197200 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.197216 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.209589 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.209674 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.209722 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:45 crc kubenswrapper[4754]: E0218 19:18:45.209735 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:45 crc kubenswrapper[4754]: E0218 19:18:45.209839 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:45 crc kubenswrapper[4754]: E0218 19:18:45.210018 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.300820 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.301172 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.301279 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.301403 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.301492 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.404387 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.404619 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.404706 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.404780 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.404851 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.412736 4754 generic.go:334] "Generic (PLEG): container finished" podID="84dca4a4-85d4-442f-a34d-d12df5252a65" containerID="d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51" exitCode=0 Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.412883 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" event={"ID":"84dca4a4-85d4-442f-a34d-d12df5252a65","Type":"ContainerDied","Data":"d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51"} Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.428305 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.440791 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.460846 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.472301 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.483753 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.496449 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.508991 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.509031 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.509042 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.509059 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.509074 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.514215 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.526292 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.537717 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.558076 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.572516 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.583066 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.596844 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.612309 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.612390 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.612429 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.612448 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.612461 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.615468 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.628234 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.640667 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.653111 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe7
05fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.663500 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.673792 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.693791 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.707918 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.714728 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.714779 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.714792 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.714813 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.714828 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.723208 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.736557 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.747867 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.758631 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.775100 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.817197 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.817233 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.817241 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.817256 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.817265 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.919656 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.919747 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.919778 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.919809 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4754]: I0218 19:18:45.919835 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.022035 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.022070 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.022079 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.022094 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.022104 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.124953 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.125036 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.125060 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.125090 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.125108 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.164493 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 15:38:30.351784665 +0000 UTC Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.228444 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.228509 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.228528 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.228559 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.228582 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.330453 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.330495 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.330505 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.330520 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.330530 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.417645 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" event={"ID":"84dca4a4-85d4-442f-a34d-d12df5252a65","Type":"ContainerStarted","Data":"3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b"} Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.419499 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerStarted","Data":"2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d"} Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.419544 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerStarted","Data":"b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed"} Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.419553 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerStarted","Data":"9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2"} Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.428258 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.432674 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.432714 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.432726 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.432744 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.432755 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.442886 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.456260 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.468818 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.481877 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.496863 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-02-18T19:18:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.512179 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-18T19:18:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.525081 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.535228 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.535278 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.535289 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.535306 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.535316 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.537539 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.550675 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.564723 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.590434 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.610885 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.638265 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.638337 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.638359 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.638396 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.638420 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.741235 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.741278 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.741290 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.741308 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.741321 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.844026 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.844086 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.844100 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.844129 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.844162 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.946565 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.946598 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.946606 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.946618 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4754]: I0218 19:18:46.946629 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.049383 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.049427 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.049438 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.049455 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.049465 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.152088 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.152192 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.152206 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.152223 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.152233 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.165009 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 23:32:08.843532012 +0000 UTC Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.209411 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:47 crc kubenswrapper[4754]: E0218 19:18:47.209555 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.209652 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.209806 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:47 crc kubenswrapper[4754]: E0218 19:18:47.209860 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:47 crc kubenswrapper[4754]: E0218 19:18:47.209887 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.274011 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.274065 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.274077 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.274097 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.274114 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.376962 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.377567 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.377580 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.377598 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.377614 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.432987 4754 generic.go:334] "Generic (PLEG): container finished" podID="84dca4a4-85d4-442f-a34d-d12df5252a65" containerID="3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b" exitCode=0 Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.433341 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" event={"ID":"84dca4a4-85d4-442f-a34d-d12df5252a65","Type":"ContainerDied","Data":"3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b"} Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.438000 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerStarted","Data":"ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4"} Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.438038 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerStarted","Data":"dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f"} Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.438050 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerStarted","Data":"6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168"} Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.449499 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.465762 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe7
05fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.479825 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.481608 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.481643 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.481651 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.481665 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.481675 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.494433 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.506348 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.517520 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.526934 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.542419 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.554214 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 
19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.569332 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.583627 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.584515 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:47 crc 
kubenswrapper[4754]: I0218 19:18:47.584588 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.584598 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.584613 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.584622 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.596621 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.608288 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.636597 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-gpz55"] Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.636983 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-gpz55" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.638504 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.638744 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.638862 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.639573 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.651964 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.678956 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.693423 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.693470 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.693479 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.693494 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.693504 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.697309 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.715985 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.738964 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.750603 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/35524782-f487-48c5-ae76-a9065bb810c9-serviceca\") pod \"node-ca-gpz55\" (UID: \"35524782-f487-48c5-ae76-a9065bb810c9\") " pod="openshift-image-registry/node-ca-gpz55" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.750644 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jtck\" (UniqueName: \"kubernetes.io/projected/35524782-f487-48c5-ae76-a9065bb810c9-kube-api-access-4jtck\") pod \"node-ca-gpz55\" (UID: \"35524782-f487-48c5-ae76-a9065bb810c9\") " pod="openshift-image-registry/node-ca-gpz55" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.750671 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35524782-f487-48c5-ae76-a9065bb810c9-host\") pod \"node-ca-gpz55\" (UID: \"35524782-f487-48c5-ae76-a9065bb810c9\") " pod="openshift-image-registry/node-ca-gpz55" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.760232 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.771268 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.784667 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.796281 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.796312 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.796322 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.796337 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.796350 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.797259 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.812417 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.824558 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\
" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.834353 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountP
ath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.847314 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.851404 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/35524782-f487-48c5-ae76-a9065bb810c9-serviceca\") pod \"node-ca-gpz55\" (UID: \"35524782-f487-48c5-ae76-a9065bb810c9\") " pod="openshift-image-registry/node-ca-gpz55" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.851447 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jtck\" (UniqueName: \"kubernetes.io/projected/35524782-f487-48c5-ae76-a9065bb810c9-kube-api-access-4jtck\") pod \"node-ca-gpz55\" (UID: \"35524782-f487-48c5-ae76-a9065bb810c9\") " pod="openshift-image-registry/node-ca-gpz55" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.851488 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/35524782-f487-48c5-ae76-a9065bb810c9-host\") pod \"node-ca-gpz55\" (UID: \"35524782-f487-48c5-ae76-a9065bb810c9\") " pod="openshift-image-registry/node-ca-gpz55" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.851591 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/35524782-f487-48c5-ae76-a9065bb810c9-host\") pod \"node-ca-gpz55\" (UID: \"35524782-f487-48c5-ae76-a9065bb810c9\") " pod="openshift-image-registry/node-ca-gpz55" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.852888 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/35524782-f487-48c5-ae76-a9065bb810c9-serviceca\") pod \"node-ca-gpz55\" (UID: \"35524782-f487-48c5-ae76-a9065bb810c9\") " pod="openshift-image-registry/node-ca-gpz55" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.863475 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.881734 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jtck\" (UniqueName: \"kubernetes.io/projected/35524782-f487-48c5-ae76-a9065bb810c9-kube-api-access-4jtck\") pod \"node-ca-gpz55\" (UID: \"35524782-f487-48c5-ae76-a9065bb810c9\") " pod="openshift-image-registry/node-ca-gpz55" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.898750 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.898969 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.899046 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.899112 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.899181 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.954807 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-gpz55" Feb 18 19:18:47 crc kubenswrapper[4754]: I0218 19:18:47.995631 4754 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 18 19:18:47 crc kubenswrapper[4754]: W0218 19:18:47.995964 4754 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Feb 18 19:18:47 crc kubenswrapper[4754]: W0218 19:18:47.999134 4754 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Feb 18 19:18:47 crc kubenswrapper[4754]: W0218 19:18:47.999830 4754 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Feb 18 19:18:48 crc kubenswrapper[4754]: W0218 19:18:48.000964 4754 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.015446 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.015496 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 
19:18:48.015507 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.015525 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.015538 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.117809 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.117854 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.117866 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.117882 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.117895 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.165928 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:34:00.72127356 +0000 UTC Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.223412 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.223458 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.223471 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.223493 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.223506 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.232303 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.247370 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.261129 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.282609 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.299371 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.316408 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.326261 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.326333 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.326354 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.326386 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.326420 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.331796 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.350287 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.369553 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773
257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.383521 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.398652 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.414329 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.428656 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.429031 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.429050 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.429058 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.429072 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.429084 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.448069 4754 generic.go:334] "Generic (PLEG): container finished" podID="84dca4a4-85d4-442f-a34d-d12df5252a65" containerID="d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371" exitCode=0 Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.448169 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" event={"ID":"84dca4a4-85d4-442f-a34d-d12df5252a65","Type":"ContainerDied","Data":"d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371"} Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.449041 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef
318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.450058 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gpz55" event={"ID":"35524782-f487-48c5-ae76-a9065bb810c9","Type":"ContainerStarted","Data":"6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de"} Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.450118 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gpz55" event={"ID":"35524782-f487-48c5-ae76-a9065bb810c9","Type":"ContainerStarted","Data":"ae791d69580d4c2d5652bfcdba8da7b512949d7b1456b70c45fb735c22320912"} Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.466794 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.486520 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 
19:18:48.500432 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.518353 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.530504 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.534953 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.534983 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.534995 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.535014 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.535027 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.542576 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.556786 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.572232 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.587035 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.601117 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.613790 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.633872 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.637636 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.637676 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.637685 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.637702 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.637712 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.644251 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.656208 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.673416 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.689127 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.705350 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 
19:18:48.717276 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.731028 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.740408 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.740465 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.740482 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.740501 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.740512 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.748134 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z 
is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.758673 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.776046 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.788786 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.802661 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.819161 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.839524 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.845456 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.845523 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.845541 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.845566 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.845581 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.854699 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.868717 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.942673 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.948884 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.948917 4754 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.948926 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.948941 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4754]: I0218 19:18:48.948954 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.053484 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.054116 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.054177 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.054208 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.054235 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.157478 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.157568 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.157584 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.157602 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.157615 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.165766 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.166891 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 17:44:08.618870925 +0000 UTC Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.209675 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.209733 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.209808 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:49 crc kubenswrapper[4754]: E0218 19:18:49.209938 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:49 crc kubenswrapper[4754]: E0218 19:18:49.210028 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:49 crc kubenswrapper[4754]: E0218 19:18:49.210109 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.261101 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.261169 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.261181 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.261199 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.261212 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.344830 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.364053 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.364095 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.364109 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.364127 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.364171 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.455602 4754 generic.go:334] "Generic (PLEG): container finished" podID="84dca4a4-85d4-442f-a34d-d12df5252a65" containerID="731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e" exitCode=0 Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.455651 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" event={"ID":"84dca4a4-85d4-442f-a34d-d12df5252a65","Type":"ContainerDied","Data":"731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e"} Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.466064 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.466100 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.466109 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.466123 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.466133 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.470517 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231
ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.487849 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.491498 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.509560 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.527275 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.543989 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.562289 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.569032 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.569057 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.569066 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.569079 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.569090 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.572638 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.584246 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-18T19:18:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.596305 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.609251 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d645
2758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.622444 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.632592 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.644701 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.654730 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.671288 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 
19:18:49.671324 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.671335 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.671351 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.671362 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.773594 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.773635 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.773650 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.773666 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.773676 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.876482 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.876716 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.876797 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.876873 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.876939 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.979513 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.979556 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.979566 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.979581 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4754]: I0218 19:18:49.979593 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.083037 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.083073 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.083085 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.083100 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.083111 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.167747 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 19:26:35.96851909 +0000 UTC Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.185764 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.185799 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.185808 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.185822 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.185832 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.288822 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.289087 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.289262 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.289400 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.289507 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.392226 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.392270 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.392285 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.392308 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.392323 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.464455 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerStarted","Data":"cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2"} Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.467224 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" event={"ID":"84dca4a4-85d4-442f-a34d-d12df5252a65","Type":"ContainerStarted","Data":"f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583"} Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.483723 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d645
2758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.495330 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.496624 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.496665 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.496681 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.496707 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.496721 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.509555 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.524118 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.534511 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.549085 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.558916 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-18T19:18:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.572896 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.588324 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.599364 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.599426 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.599440 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc 
kubenswrapper[4754]: I0218 19:18:50.599461 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.599474 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.611402 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.638319 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.652873 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.668295 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.686992 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.702658 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.702856 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.702925 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.702990 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.703055 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.806650 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.806698 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.806711 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.806730 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.806744 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.908937 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.908982 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.908991 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.909007 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4754]: I0218 19:18:50.909018 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.012108 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.012180 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.012199 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.012222 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.012236 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.114634 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.114671 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.114679 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.114693 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.114702 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.168592 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:36:31.961131454 +0000 UTC Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.209256 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:51 crc kubenswrapper[4754]: E0218 19:18:51.209440 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.209570 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:51 crc kubenswrapper[4754]: E0218 19:18:51.209706 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.209559 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:51 crc kubenswrapper[4754]: E0218 19:18:51.209809 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.217641 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.217994 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.218102 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.218254 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.218351 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.320682 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.320713 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.320722 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.320737 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.320747 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.424275 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.424323 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.424334 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.424352 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.424369 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.476488 4754 generic.go:334] "Generic (PLEG): container finished" podID="84dca4a4-85d4-442f-a34d-d12df5252a65" containerID="f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583" exitCode=0 Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.476543 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" event={"ID":"84dca4a4-85d4-442f-a34d-d12df5252a65","Type":"ContainerDied","Data":"f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583"} Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.490415 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf
2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.506821 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.524056 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.526436 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.526706 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.526847 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.526928 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.526990 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.540345 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.560654 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.575843 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d
586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.588626 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.601799 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.619300 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.633173 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.633227 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.633237 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.633255 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.633267 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.640158 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.653811 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.670835 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.682010 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.693742 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.736650 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.737269 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.737542 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.737577 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.737597 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.840845 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.840920 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.840948 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.840983 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.841005 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.944720 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.944785 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.944802 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.944829 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4754]: I0218 19:18:51.944849 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.047919 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.047981 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.047995 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.048016 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.048034 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.151793 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.151841 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.151859 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.151883 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.151902 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.169658 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 09:57:23.893568797 +0000 UTC Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.255654 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.255715 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.255728 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.255748 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.255763 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.358392 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.358441 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.358454 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.358475 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.358489 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.462925 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.462981 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.462994 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.463019 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.463033 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.486969 4754 generic.go:334] "Generic (PLEG): container finished" podID="84dca4a4-85d4-442f-a34d-d12df5252a65" containerID="12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3" exitCode=0 Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.487095 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" event={"ID":"84dca4a4-85d4-442f-a34d-d12df5252a65","Type":"ContainerDied","Data":"12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3"} Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.493902 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerStarted","Data":"81d9180bcb726fbece8efaf3e9aad2c0012085ccbed8d356ce3304fee59d6ae7"} Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.494892 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.494958 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.503886 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.528238 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.545293 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multu
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.558647 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.560239 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea
83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.560962 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.565766 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.565842 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.565858 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.565873 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.565885 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.571150 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.584135 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.599561 4754 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e
6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149
d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.612368 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.626150 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.638439 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.655677 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.667636 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.667666 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.667676 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.667694 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.667708 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.672093 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.681619 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.696792 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.710688 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d
586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.724986 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.743026 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.755729 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.767576 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.770700 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.770739 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.770749 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc 
kubenswrapper[4754]: I0218 19:18:52.770764 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.770774 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.779463 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.796793 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d9180bcb726fbece8efaf3e9aad2c0012085ccbed8d356ce3304fee59d6ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.809413 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.828544 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.849041 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.873162 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.873332 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.873438 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.873515 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.873572 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.882378 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.897975 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.903389 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:18:52 crc kubenswrapper[4754]: E0218 19:18:52.903631 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:08.90361255 +0000 UTC m=+51.354025336 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.915811 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes
\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.930311 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39
aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.975914 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.975957 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.975969 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.975984 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4754]: I0218 19:18:52.975992 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.005030 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.005369 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.005396 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.005203 4754 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:18:53 crc kubenswrapper[4754]: 
I0218 19:18:53.005420 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.005483 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:09.005464763 +0000 UTC m=+51.455877559 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.005533 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.005551 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.005562 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.005565 4754 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.005576 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.005586 4754 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.005605 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:09.005594547 +0000 UTC m=+51.456007343 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.005621 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-18 19:19:09.005613818 +0000 UTC m=+51.456026614 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.005687 4754 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.005726 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:09.0057171 +0000 UTC m=+51.456129896 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.078313 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.078344 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.078353 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.078365 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.078374 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.169819 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:12:18.626624764 +0000 UTC Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.180380 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.180414 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.180426 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.180444 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.180456 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.208821 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.208929 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.209230 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.209285 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.209324 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.209362 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.283125 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.283196 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.283207 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.283223 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.283257 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.386614 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.386673 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.386683 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.386700 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.386712 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.489159 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.489203 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.489213 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.489229 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.489239 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.498992 4754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.499793 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" event={"ID":"84dca4a4-85d4-442f-a34d-d12df5252a65","Type":"ContainerStarted","Data":"cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a"} Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.517077 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.529579 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.549126 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.563816 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.592288 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 
19:18:53.592326 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.592336 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.592355 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.592368 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.593088 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d9180bcb726fbece8efaf3e9aad2c0012085ccbed8d356ce3304fee59d6ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.605503 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c
070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.619088 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.634741 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.647060 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.659501 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.701694 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.703473 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.703501 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.703512 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.703527 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.703539 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.725692 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.742606 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.749788 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.750098 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.750217 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.750320 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.750410 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.762216 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.766640 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.771308 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.771345 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.771355 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.771374 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.771385 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.785635 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.790749 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.790787 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.790798 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.790814 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.790825 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.806195 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.810978 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.811048 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.811063 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.811087 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.811100 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.825454 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.829917 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.829992 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.830011 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.830048 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.830072 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.843401 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:53 crc kubenswrapper[4754]: E0218 19:18:53.843531 4754 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.845694 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.845754 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.845768 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.845787 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc 
kubenswrapper[4754]: I0218 19:18:53.845797 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.948608 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.948674 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.948686 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.948706 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4754]: I0218 19:18:53.948718 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.052465 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.052512 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.052525 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.052544 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.052557 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.155363 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.155415 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.155429 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.155450 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.155469 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.170913 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 22:57:33.932709375 +0000 UTC Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.258369 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.258424 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.258435 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.258453 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.258468 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.362116 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.362246 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.362275 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.362312 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.362338 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.465286 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.465713 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.465865 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.465993 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.466107 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.502448 4754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.570186 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.570249 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.570265 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.570291 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.570308 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.673218 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.673272 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.673285 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.673308 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.673323 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.776759 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.776811 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.776832 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.776855 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4754]: I0218 19:18:54.776872 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.036101 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.036547 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.036663 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.036755 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.036836 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.139994 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.140326 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.140472 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.140620 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.140761 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.171670 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 06:41:55.82995024 +0000 UTC Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.209261 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.209294 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:55 crc kubenswrapper[4754]: E0218 19:18:55.209787 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:55 crc kubenswrapper[4754]: E0218 19:18:55.209843 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.209316 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:55 crc kubenswrapper[4754]: E0218 19:18:55.210110 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.243170 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.243223 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.243240 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.243265 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.243287 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.352106 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.353046 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.353190 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.353298 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.353403 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.456912 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.456964 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.456974 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.456992 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.457004 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.497805 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf"] Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.498749 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.501882 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.502196 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.518716 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.537698 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.538625 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b8e7ce0-bf49-4935-bf1f-44df60660b11-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lzrmf\" (UID: \"5b8e7ce0-bf49-4935-bf1f-44df60660b11\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.538717 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/5b8e7ce0-bf49-4935-bf1f-44df60660b11-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lzrmf\" (UID: \"5b8e7ce0-bf49-4935-bf1f-44df60660b11\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.538746 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m8rf\" (UniqueName: \"kubernetes.io/projected/5b8e7ce0-bf49-4935-bf1f-44df60660b11-kube-api-access-9m8rf\") pod \"ovnkube-control-plane-749d76644c-lzrmf\" (UID: \"5b8e7ce0-bf49-4935-bf1f-44df60660b11\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.538770 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b8e7ce0-bf49-4935-bf1f-44df60660b11-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lzrmf\" (UID: \"5b8e7ce0-bf49-4935-bf1f-44df60660b11\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.552407 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.560182 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.560238 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc 
kubenswrapper[4754]: I0218 19:18:55.560249 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.560267 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.560281 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.568838 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.580677 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.601238 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d9180bcb726fbece8efaf3e9aad2c0012085ccbed8d356ce3304fee59d6ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.612685 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.629016 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.639759 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b8e7ce0-bf49-4935-bf1f-44df60660b11-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lzrmf\" (UID: \"5b8e7ce0-bf49-4935-bf1f-44df60660b11\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.639875 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5b8e7ce0-bf49-4935-bf1f-44df60660b11-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lzrmf\" (UID: \"5b8e7ce0-bf49-4935-bf1f-44df60660b11\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 
19:18:55.639914 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m8rf\" (UniqueName: \"kubernetes.io/projected/5b8e7ce0-bf49-4935-bf1f-44df60660b11-kube-api-access-9m8rf\") pod \"ovnkube-control-plane-749d76644c-lzrmf\" (UID: \"5b8e7ce0-bf49-4935-bf1f-44df60660b11\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.639949 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b8e7ce0-bf49-4935-bf1f-44df60660b11-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lzrmf\" (UID: \"5b8e7ce0-bf49-4935-bf1f-44df60660b11\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.641121 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b8e7ce0-bf49-4935-bf1f-44df60660b11-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lzrmf\" (UID: \"5b8e7ce0-bf49-4935-bf1f-44df60660b11\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.641267 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b8e7ce0-bf49-4935-bf1f-44df60660b11-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lzrmf\" (UID: \"5b8e7ce0-bf49-4935-bf1f-44df60660b11\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.641921 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.648792 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5b8e7ce0-bf49-4935-bf1f-44df60660b11-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lzrmf\" (UID: \"5b8e7ce0-bf49-4935-bf1f-44df60660b11\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.659263 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m8rf\" (UniqueName: \"kubernetes.io/projected/5b8e7ce0-bf49-4935-bf1f-44df60660b11-kube-api-access-9m8rf\") pod \"ovnkube-control-plane-749d76644c-lzrmf\" (UID: \"5b8e7ce0-bf49-4935-bf1f-44df60660b11\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.661418 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731ba
a8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.663979 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.664042 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.664062 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.664090 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.664109 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.680194 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.695551 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.712826 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.728267 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.742198 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.767160 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.767212 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.767235 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.767264 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.767283 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.818583 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.873637 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.873677 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.873688 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.873703 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.873714 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.976807 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.976852 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.976868 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.976890 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4754]: I0218 19:18:55.976903 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.079651 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.079677 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.079687 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.079701 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.079714 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.172014 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:20:23.651291519 +0000 UTC Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.182314 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.182359 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.182372 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.182389 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.182401 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.284496 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.284536 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.284547 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.284563 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.284575 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.387753 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.387791 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.387801 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.387817 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.387827 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.490095 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.490155 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.490170 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.490184 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.490194 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.509178 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" event={"ID":"5b8e7ce0-bf49-4935-bf1f-44df60660b11","Type":"ContainerStarted","Data":"66ff5b14fe4ebe106c38a9f2ef8629a9b91fcf046e408be869e344c02fee428e"} Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.509218 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" event={"ID":"5b8e7ce0-bf49-4935-bf1f-44df60660b11","Type":"ContainerStarted","Data":"fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7"} Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.509227 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" event={"ID":"5b8e7ce0-bf49-4935-bf1f-44df60660b11","Type":"ContainerStarted","Data":"a3e8bd5764b384d960242b00c451e1f518afab7e02484abb08202c5b2c530e94"} Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.511430 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glx55_82e5683f-ada7-4578-a6e3-6f0dd72dd149/ovnkube-controller/0.log" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.513809 4754 generic.go:334] "Generic (PLEG): container finished" podID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerID="81d9180bcb726fbece8efaf3e9aad2c0012085ccbed8d356ce3304fee59d6ae7" exitCode=1 Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.513847 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerDied","Data":"81d9180bcb726fbece8efaf3e9aad2c0012085ccbed8d356ce3304fee59d6ae7"} Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.514756 4754 scope.go:117] "RemoveContainer" 
containerID="81d9180bcb726fbece8efaf3e9aad2c0012085ccbed8d356ce3304fee59d6ae7" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.532457 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.547945 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\
"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.561970 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"
image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fcf046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.581790 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba9
3a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.592882 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.592973 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.592987 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.593008 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.593021 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.599557 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.613204 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.634371 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.652044 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.672125 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d9180bcb726fbece8efaf3e9aad2c0012085ccbed8d356ce3304fee59d6ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.685387 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.698989 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.699029 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.699038 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.699052 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 
19:18:56.699064 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.700161 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-poli
cy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.718235 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.735950 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731ba
a8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.752209 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.767958 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.780630 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.789472 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.802178 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.802236 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.802257 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.802281 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.802299 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.807041 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.821449 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.838415 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.853685 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.869242 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.880843 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fc
f046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.892716 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.905544 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.905607 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.905620 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.905642 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.905655 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.916812 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d9180bcb726fbece8efaf3e9aad2c0012085ccbed8d356ce3304fee59d6ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81d9180bcb726fbece8efaf3e9aad2c0012085ccbed8d356ce3304fee59d6ae7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"8:56.065499 6033 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:56.065534 6033 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:56.065573 6033 reflector.go:311] Stopping reflector 
*v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:56.065588 6033 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:56.065600 6033 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:56.065610 6033 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:56.065620 6033 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:56.065631 6033 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:56.065644 6033 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:56.065704 6033 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:56.066085 6033 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:56.066994 6033 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:18:56.067033 6033 factory.go:656] Stopping watch factory\\\\nI0218 19:18:56.067056 6033 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.945537 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.971240 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.988085 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-qztvz"] Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.988631 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:18:56 crc kubenswrapper[4754]: E0218 19:18:56.988704 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:18:56 crc kubenswrapper[4754]: I0218 19:18:56.990746 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.003934 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.007767 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.007989 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.008068 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.008133 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.008236 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.019545 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.034897 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.050962 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe7
05fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.053378 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs\") pod \"network-metrics-daemon-qztvz\" (UID: \"539505bb-b2d2-4adc-be1e-a95f73778a52\") " pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.053477 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj67g\" (UniqueName: \"kubernetes.io/projected/539505bb-b2d2-4adc-be1e-a95f73778a52-kube-api-access-kj67g\") pod \"network-metrics-daemon-qztvz\" (UID: \"539505bb-b2d2-4adc-be1e-a95f73778a52\") " pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.068852 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fcf046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.084782 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.098132 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.110295 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.110357 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.110373 4754 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.110393 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.110404 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.123709 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d9180bcb726fbece8efaf3e9aad2c0012085ccbed8d356ce3304fee59d6ae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81d9180bcb726fbece8efaf3e9aad2c0012085ccbed8d356ce3304fee59d6ae7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"8:56.065499 6033 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:56.065534 6033 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:56.065573 6033 reflector.go:311] 
Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:56.065588 6033 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:56.065600 6033 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:56.065610 6033 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:56.065620 6033 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:56.065631 6033 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:56.065644 6033 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:56.065704 6033 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:56.066085 6033 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:56.066994 6033 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:18:56.067033 6033 factory.go:656] Stopping watch factory\\\\nI0218 19:18:56.067056 6033 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.134986 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.147058 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.155257 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs\") pod \"network-metrics-daemon-qztvz\" (UID: 
\"539505bb-b2d2-4adc-be1e-a95f73778a52\") " pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.155391 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj67g\" (UniqueName: \"kubernetes.io/projected/539505bb-b2d2-4adc-be1e-a95f73778a52-kube-api-access-kj67g\") pod \"network-metrics-daemon-qztvz\" (UID: \"539505bb-b2d2-4adc-be1e-a95f73778a52\") " pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:18:57 crc kubenswrapper[4754]: E0218 19:18:57.155659 4754 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:18:57 crc kubenswrapper[4754]: E0218 19:18:57.155882 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs podName:539505bb-b2d2-4adc-be1e-a95f73778a52 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:57.655852607 +0000 UTC m=+40.106265403 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs") pod "network-metrics-daemon-qztvz" (UID: "539505bb-b2d2-4adc-be1e-a95f73778a52") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.163467 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.172405 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 09:32:14.116703938 +0000 UTC Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.173228 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj67g\" (UniqueName: \"kubernetes.io/projected/539505bb-b2d2-4adc-be1e-a95f73778a52-kube-api-access-kj67g\") pod \"network-metrics-daemon-qztvz\" (UID: \"539505bb-b2d2-4adc-be1e-a95f73778a52\") " pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.178707 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.191302 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.201199 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.208799 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.208799 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:57 crc kubenswrapper[4754]: E0218 19:18:57.208952 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.208921 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:57 crc kubenswrapper[4754]: E0218 19:18:57.209078 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:57 crc kubenswrapper[4754]: E0218 19:18:57.209258 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.212544 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.212600 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.212617 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.212638 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.212655 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.218514 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.231990 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.293594 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.306585 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc 
kubenswrapper[4754]: I0218 19:18:57.316026 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.316112 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.316128 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.316168 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.316186 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.418585 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.418649 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.418661 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.418685 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.418701 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.520516 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.520571 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.520584 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.520604 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.520618 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.521355 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glx55_82e5683f-ada7-4578-a6e3-6f0dd72dd149/ovnkube-controller/1.log" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.522166 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glx55_82e5683f-ada7-4578-a6e3-6f0dd72dd149/ovnkube-controller/0.log" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.526940 4754 generic.go:334] "Generic (PLEG): container finished" podID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerID="294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8" exitCode=1 Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.526982 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerDied","Data":"294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8"} Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.527068 4754 scope.go:117] "RemoveContainer" containerID="81d9180bcb726fbece8efaf3e9aad2c0012085ccbed8d356ce3304fee59d6ae7" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.527944 4754 scope.go:117] "RemoveContainer" containerID="294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8" Feb 18 19:18:57 crc kubenswrapper[4754]: E0218 19:18:57.528249 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-glx55_openshift-ovn-kubernetes(82e5683f-ada7-4578-a6e3-6f0dd72dd149)\"" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.549898 4754 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.563465 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.589280 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558
343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.607662 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.624374 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.624456 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.624479 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.624508 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.624555 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.625011 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc 
kubenswrapper[4754]: I0218 19:18:57.649475 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.668419 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.685550 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 
19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.697068 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs\") pod \"network-metrics-daemon-qztvz\" (UID: \"539505bb-b2d2-4adc-be1e-a95f73778a52\") " pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:18:57 crc kubenswrapper[4754]: E0218 19:18:57.697394 4754 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:18:57 crc kubenswrapper[4754]: E0218 19:18:57.697581 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs podName:539505bb-b2d2-4adc-be1e-a95f73778a52 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:58.697537372 +0000 UTC m=+41.147950198 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs") pod "network-metrics-daemon-qztvz" (UID: "539505bb-b2d2-4adc-be1e-a95f73778a52") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.703834 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fc
f046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.724777 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.728185 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.728257 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.728279 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.728308 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4754]: 
I0218 19:18:57.728330 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.742548 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.761115 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.778615 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.827392 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81d9180bcb726fbece8efaf3e9aad2c0012085ccbed8d356ce3304fee59d6ae7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"8:56.065499 6033 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:56.065534 6033 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:56.065573 6033 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:56.065588 6033 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI0218 19:18:56.065600 6033 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:56.065610 6033 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:56.065620 6033 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:56.065631 6033 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:56.065644 6033 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:56.065704 6033 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:56.066085 6033 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:56.066994 6033 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:18:56.067033 6033 factory.go:656] Stopping watch factory\\\\nI0218 19:18:56.067056 6033 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:57Z\\\",\\\"message\\\":\\\" (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372588 6235 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:18:57.372766 6235 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.372794 6235 reflector.go:311] Stopping reflector 
*v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:18:57.372886 6235 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372961 6235 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373130 6235 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373847 6235 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:18:57.373895 6235 factory.go:656] Stopping watch factory\\\\nI0218 19:18:57.373919 6235 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.831645 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.831698 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.831720 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.831746 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.831763 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.846189 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.869513 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.934808 4754 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.934876 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.934888 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.934909 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4754]: I0218 19:18:57.934922 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.038108 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.038201 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.038219 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.038244 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.038261 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.142046 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.142107 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.142118 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.142154 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.142168 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.173459 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 22:39:18.002234298 +0000 UTC Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.210332 4754 scope.go:117] "RemoveContainer" containerID="92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.233304 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.247410 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.247453 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.247464 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.247487 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.247506 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.247796 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.259884 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc 
kubenswrapper[4754]: I0218 19:18:58.279595 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc 
kubenswrapper[4754]: I0218 19:18:58.292188 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.305864 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fc
f046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.320714 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.340640 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81d9180bcb726fbece8efaf3e9aad2c0012085ccbed8d356ce3304fee59d6ae7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"8:56.065499 6033 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:56.065534 6033 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:56.065573 6033 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:56.065588 6033 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI0218 19:18:56.065600 6033 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:56.065610 6033 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:56.065620 6033 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:56.065631 6033 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:56.065644 6033 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:56.065704 6033 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:56.066085 6033 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:56.066994 6033 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:18:56.067033 6033 factory.go:656] Stopping watch factory\\\\nI0218 19:18:56.067056 6033 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:57Z\\\",\\\"message\\\":\\\" (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372588 6235 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:18:57.372766 6235 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.372794 6235 reflector.go:311] Stopping reflector 
*v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:18:57.372886 6235 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372961 6235 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373130 6235 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373847 6235 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:18:57.373895 6235 factory.go:656] Stopping watch factory\\\\nI0218 19:18:57.373919 6235 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.352564 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.353031 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.353056 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.353065 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.353080 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.353089 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.364648 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231
ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.378612 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4
813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.390845 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.406906 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.425345 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.439102 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.454335 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558
343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.456423 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.456462 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.456480 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.456501 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.456517 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.534792 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.537216 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f"} Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.537861 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.539384 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glx55_82e5683f-ada7-4578-a6e3-6f0dd72dd149/ovnkube-controller/1.log" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.559171 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.559718 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 
19:18:58.559783 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.559800 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.559825 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.559843 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.588058 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81d9180bcb726fbece8efaf3e9aad2c0012085ccbed8d356ce3304fee59d6ae7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"8:56.065499 6033 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 
19:18:56.065534 6033 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:56.065573 6033 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:56.065588 6033 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:56.065600 6033 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:56.065610 6033 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:56.065620 6033 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:56.065631 6033 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:56.065644 6033 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:56.065704 6033 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:56.066085 6033 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:56.066994 6033 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:18:56.067033 6033 factory.go:656] Stopping watch factory\\\\nI0218 19:18:56.067056 6033 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:57Z\\\",\\\"message\\\":\\\" (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372588 6235 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:18:57.372766 6235 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.372794 6235 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:18:57.372886 6235 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372961 6235 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373130 6235 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373847 6235 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:18:57.373895 6235 factory.go:656] Stopping watch factory\\\\nI0218 19:18:57.373919 6235 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.602100 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.618231 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.633309 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.647980 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.663429 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.664115 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.664199 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.664221 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.664249 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.664267 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.681947 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.694659 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.709561 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558
343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.711944 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs\") pod \"network-metrics-daemon-qztvz\" (UID: \"539505bb-b2d2-4adc-be1e-a95f73778a52\") " pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:18:58 crc kubenswrapper[4754]: E0218 19:18:58.712103 4754 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:18:58 crc kubenswrapper[4754]: E0218 19:18:58.712212 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs podName:539505bb-b2d2-4adc-be1e-a95f73778a52 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:00.712185724 +0000 UTC m=+43.162598530 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs") pod "network-metrics-daemon-qztvz" (UID: "539505bb-b2d2-4adc-be1e-a95f73778a52") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.724874 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-
cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.739594 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.751122 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.767109 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.769980 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.770020 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.770033 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.770055 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.770068 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.780188 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.790793 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fc
f046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:58Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.872919 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.872963 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.872978 4754 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.872996 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.873010 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.977354 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.977427 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.977454 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.977484 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4754]: I0218 19:18:58.977505 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.079740 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.079785 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.079795 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.079836 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.079849 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:59Z","lastTransitionTime":"2026-02-18T19:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.174214 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 10:28:06.757766992 +0000 UTC Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.182029 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.182068 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.182080 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.182097 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.182109 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:59Z","lastTransitionTime":"2026-02-18T19:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.209342 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.209485 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:59 crc kubenswrapper[4754]: E0218 19:18:59.209580 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.209598 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.209698 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:18:59 crc kubenswrapper[4754]: E0218 19:18:59.209845 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:59 crc kubenswrapper[4754]: E0218 19:18:59.210017 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:18:59 crc kubenswrapper[4754]: E0218 19:18:59.210068 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.284166 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.284211 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.284220 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.284238 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.284249 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:59Z","lastTransitionTime":"2026-02-18T19:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.388071 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.388106 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.388117 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.388133 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.388161 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:59Z","lastTransitionTime":"2026-02-18T19:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.490701 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.490738 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.490747 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.490762 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.490772 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:59Z","lastTransitionTime":"2026-02-18T19:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.592938 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.593028 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.593050 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.593070 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.593082 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:59Z","lastTransitionTime":"2026-02-18T19:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.695708 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.695754 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.695764 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.695784 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.695797 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:59Z","lastTransitionTime":"2026-02-18T19:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.798304 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.798343 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.798352 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.798366 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.798375 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:59Z","lastTransitionTime":"2026-02-18T19:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.900893 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.900930 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.900939 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.900954 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:59 crc kubenswrapper[4754]: I0218 19:18:59.900965 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:59Z","lastTransitionTime":"2026-02-18T19:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.004269 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.004332 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.004351 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.004406 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.004424 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:00Z","lastTransitionTime":"2026-02-18T19:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.109240 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.109318 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.109361 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.109396 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.109419 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:00Z","lastTransitionTime":"2026-02-18T19:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.175263 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 17:56:39.881286408 +0000 UTC Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.212876 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.212937 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.212955 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.212979 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.212995 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:00Z","lastTransitionTime":"2026-02-18T19:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.315633 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.315668 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.315678 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.315696 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.315707 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:00Z","lastTransitionTime":"2026-02-18T19:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.418957 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.419001 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.419012 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.419029 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.419041 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:00Z","lastTransitionTime":"2026-02-18T19:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.521786 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.521826 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.521837 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.521852 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.521865 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:00Z","lastTransitionTime":"2026-02-18T19:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.624585 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.624634 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.624647 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.624664 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.624677 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:00Z","lastTransitionTime":"2026-02-18T19:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.727649 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.727712 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.727731 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.727756 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.727774 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:00Z","lastTransitionTime":"2026-02-18T19:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.734209 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs\") pod \"network-metrics-daemon-qztvz\" (UID: \"539505bb-b2d2-4adc-be1e-a95f73778a52\") " pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:00 crc kubenswrapper[4754]: E0218 19:19:00.734383 4754 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:19:00 crc kubenswrapper[4754]: E0218 19:19:00.734449 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs podName:539505bb-b2d2-4adc-be1e-a95f73778a52 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:04.734430473 +0000 UTC m=+47.184843269 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs") pod "network-metrics-daemon-qztvz" (UID: "539505bb-b2d2-4adc-be1e-a95f73778a52") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.830631 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.830712 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.830729 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.830755 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.830775 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:00Z","lastTransitionTime":"2026-02-18T19:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.933329 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.933397 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.933415 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.933440 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:00 crc kubenswrapper[4754]: I0218 19:19:00.933458 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:00Z","lastTransitionTime":"2026-02-18T19:19:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.035752 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.035788 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.035800 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.035816 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.035828 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:01Z","lastTransitionTime":"2026-02-18T19:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.138874 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.138942 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.138961 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.138983 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.139000 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:01Z","lastTransitionTime":"2026-02-18T19:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.176015 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 13:43:20.581731551 +0000 UTC Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.209230 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.209270 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:01 crc kubenswrapper[4754]: E0218 19:19:01.209382 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.209331 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:01 crc kubenswrapper[4754]: E0218 19:19:01.209496 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.209331 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:01 crc kubenswrapper[4754]: E0218 19:19:01.209605 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:01 crc kubenswrapper[4754]: E0218 19:19:01.209666 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.242348 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.242373 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.242381 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.242396 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.242405 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:01Z","lastTransitionTime":"2026-02-18T19:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.344489 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.344514 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.344521 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.344534 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.344542 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:01Z","lastTransitionTime":"2026-02-18T19:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.446046 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.446059 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.446066 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.446075 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.446082 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:01Z","lastTransitionTime":"2026-02-18T19:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.548811 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.548879 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.548893 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.548918 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.548935 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:01Z","lastTransitionTime":"2026-02-18T19:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.650989 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.651031 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.651044 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.651062 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.651075 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:01Z","lastTransitionTime":"2026-02-18T19:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.753977 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.754183 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.754194 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.754210 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.754220 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:01Z","lastTransitionTime":"2026-02-18T19:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.856543 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.856582 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.856591 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.856606 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.856615 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:01Z","lastTransitionTime":"2026-02-18T19:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.959173 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.959228 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.959244 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.959268 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:01 crc kubenswrapper[4754]: I0218 19:19:01.959282 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:01Z","lastTransitionTime":"2026-02-18T19:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.061458 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.061508 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.061520 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.061536 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.061551 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:02Z","lastTransitionTime":"2026-02-18T19:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.164024 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.164067 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.164079 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.164094 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.164105 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:02Z","lastTransitionTime":"2026-02-18T19:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.176460 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 07:13:20.423726611 +0000 UTC Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.266464 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.266498 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.266508 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.266521 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.266532 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:02Z","lastTransitionTime":"2026-02-18T19:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.369250 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.369292 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.369303 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.369318 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.369330 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:02Z","lastTransitionTime":"2026-02-18T19:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.471689 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.471734 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.471743 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.471758 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.471782 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:02Z","lastTransitionTime":"2026-02-18T19:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.574785 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.574837 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.574849 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.574867 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.574879 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:02Z","lastTransitionTime":"2026-02-18T19:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.678089 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.678428 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.678439 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.678454 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.678463 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:02Z","lastTransitionTime":"2026-02-18T19:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.780816 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.780843 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.780850 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.780862 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.780870 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:02Z","lastTransitionTime":"2026-02-18T19:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.883093 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.883126 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.883154 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.883170 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.883180 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:02Z","lastTransitionTime":"2026-02-18T19:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.985080 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.985102 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.985109 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.985119 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:02 crc kubenswrapper[4754]: I0218 19:19:02.985128 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:02Z","lastTransitionTime":"2026-02-18T19:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.087656 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.087687 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.087696 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.087711 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.087722 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:03Z","lastTransitionTime":"2026-02-18T19:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.190632 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.190663 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.190672 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.190686 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.190696 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:03Z","lastTransitionTime":"2026-02-18T19:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.202847 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 17:42:32.791634999 +0000 UTC Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.209718 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.209780 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.209820 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.209865 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:03 crc kubenswrapper[4754]: E0218 19:19:03.209889 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:03 crc kubenswrapper[4754]: E0218 19:19:03.210033 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:03 crc kubenswrapper[4754]: E0218 19:19:03.210155 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:03 crc kubenswrapper[4754]: E0218 19:19:03.210232 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.293766 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.293821 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.293832 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.293887 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.293902 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:03Z","lastTransitionTime":"2026-02-18T19:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.397391 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.397442 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.397453 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.397472 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.397484 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:03Z","lastTransitionTime":"2026-02-18T19:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.499896 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.499929 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.499939 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.499953 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.499963 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:03Z","lastTransitionTime":"2026-02-18T19:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.602389 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.602450 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.602470 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.602537 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.602563 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:03Z","lastTransitionTime":"2026-02-18T19:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.704856 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.704939 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.704964 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.705000 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.705023 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:03Z","lastTransitionTime":"2026-02-18T19:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.807888 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.807919 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.807927 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.807941 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.807949 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:03Z","lastTransitionTime":"2026-02-18T19:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.910625 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.910714 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.910728 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.910745 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:03 crc kubenswrapper[4754]: I0218 19:19:03.910759 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:03Z","lastTransitionTime":"2026-02-18T19:19:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.013614 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.013699 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.013718 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.013744 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.013763 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:04Z","lastTransitionTime":"2026-02-18T19:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.110771 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.110850 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.110863 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.110881 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.110896 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:04Z","lastTransitionTime":"2026-02-18T19:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:04 crc kubenswrapper[4754]: E0218 19:19:04.124678 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:04Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.128775 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.128876 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.128945 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.129009 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.129077 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:04Z","lastTransitionTime":"2026-02-18T19:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:04 crc kubenswrapper[4754]: E0218 19:19:04.147284 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:04Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.151724 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.151772 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.151785 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.151803 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.151821 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:04Z","lastTransitionTime":"2026-02-18T19:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:04 crc kubenswrapper[4754]: E0218 19:19:04.172435 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:04Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.175726 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.175756 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.175767 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.175783 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.175793 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:04Z","lastTransitionTime":"2026-02-18T19:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:04 crc kubenswrapper[4754]: E0218 19:19:04.197032 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:04Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.202324 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.202381 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.202399 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.202424 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.202441 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:04Z","lastTransitionTime":"2026-02-18T19:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.203433 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 16:07:01.169303903 +0000 UTC Feb 18 19:19:04 crc kubenswrapper[4754]: E0218 19:19:04.227477 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",
\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:04Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:04 crc kubenswrapper[4754]: E0218 19:19:04.227906 4754 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.229839 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.229893 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.229906 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.229928 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.229941 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:04Z","lastTransitionTime":"2026-02-18T19:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.333299 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.333355 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.333375 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.333398 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.333415 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:04Z","lastTransitionTime":"2026-02-18T19:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.436575 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.436640 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.436658 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.436683 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.436700 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:04Z","lastTransitionTime":"2026-02-18T19:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.540058 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.540122 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.540176 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.540201 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.540220 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:04Z","lastTransitionTime":"2026-02-18T19:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.643403 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.643457 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.643474 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.643499 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.643516 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:04Z","lastTransitionTime":"2026-02-18T19:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.747542 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.747639 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.747657 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.747691 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.747714 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:04Z","lastTransitionTime":"2026-02-18T19:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.817747 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs\") pod \"network-metrics-daemon-qztvz\" (UID: \"539505bb-b2d2-4adc-be1e-a95f73778a52\") " pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:04 crc kubenswrapper[4754]: E0218 19:19:04.818137 4754 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:19:04 crc kubenswrapper[4754]: E0218 19:19:04.818267 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs podName:539505bb-b2d2-4adc-be1e-a95f73778a52 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:12.818244802 +0000 UTC m=+55.268657638 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs") pod "network-metrics-daemon-qztvz" (UID: "539505bb-b2d2-4adc-be1e-a95f73778a52") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.850562 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.850613 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.850626 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.850650 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.850663 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:04Z","lastTransitionTime":"2026-02-18T19:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.953063 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.953117 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.953132 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.953183 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:04 crc kubenswrapper[4754]: I0218 19:19:04.953196 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:04Z","lastTransitionTime":"2026-02-18T19:19:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.056172 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.056213 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.056222 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.056235 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.056243 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:05Z","lastTransitionTime":"2026-02-18T19:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.159881 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.159965 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.159984 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.160016 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.160046 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:05Z","lastTransitionTime":"2026-02-18T19:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.204298 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 16:39:51.37256829 +0000 UTC Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.209630 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.209740 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.209794 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.209745 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:05 crc kubenswrapper[4754]: E0218 19:19:05.209986 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:05 crc kubenswrapper[4754]: E0218 19:19:05.210111 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:05 crc kubenswrapper[4754]: E0218 19:19:05.210527 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:05 crc kubenswrapper[4754]: E0218 19:19:05.210685 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.262814 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.263190 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.263408 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.263581 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.263673 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:05Z","lastTransitionTime":"2026-02-18T19:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.366879 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.366919 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.366928 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.366943 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.366952 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:05Z","lastTransitionTime":"2026-02-18T19:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.469840 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.470286 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.470421 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.470554 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.470716 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:05Z","lastTransitionTime":"2026-02-18T19:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.573155 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.573528 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.573645 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.573754 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.573837 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:05Z","lastTransitionTime":"2026-02-18T19:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.677366 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.677448 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.677594 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.677633 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.677655 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:05Z","lastTransitionTime":"2026-02-18T19:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.781058 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.781120 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.781132 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.781182 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.781196 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:05Z","lastTransitionTime":"2026-02-18T19:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.821548 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.822752 4754 scope.go:117] "RemoveContainer" containerID="294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8" Feb 18 19:19:05 crc kubenswrapper[4754]: E0218 19:19:05.822971 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-glx55_openshift-ovn-kubernetes(82e5683f-ada7-4578-a6e3-6f0dd72dd149)\"" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.840224 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.855299 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.873280 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.884286 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.884331 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.884340 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 
19:19:05.884356 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.884368 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:05Z","lastTransitionTime":"2026-02-18T19:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.887708 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.902316 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.934519 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:57Z\\\",\\\"message\\\":\\\" (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372588 6235 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 
19:18:57.372766 6235 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.372794 6235 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:18:57.372886 6235 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372961 6235 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373130 6235 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373847 6235 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:18:57.373895 6235 factory.go:656] Stopping watch factory\\\\nI0218 19:18:57.373919 6235 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-glx55_openshift-ovn-kubernetes(82e5683f-ada7-4578-a6e3-6f0dd72dd149)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a6
87cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.945321 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.956327 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18
T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.967278 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.987174 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.987208 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.987216 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.987230 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.987241 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:05Z","lastTransitionTime":"2026-02-18T19:19:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:05 crc kubenswrapper[4754]: I0218 19:19:05.988614 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:05Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.008286 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.021635 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:19:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.040655 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:06 crc 
kubenswrapper[4754]: I0218 19:19:06.054272 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:06 crc 
kubenswrapper[4754]: I0218 19:19:06.065745 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.078108 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fc
f046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:06Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.091091 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.091150 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.091161 4754 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.091178 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.091215 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:06Z","lastTransitionTime":"2026-02-18T19:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.193547 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.193579 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.193589 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.193603 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.193611 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:06Z","lastTransitionTime":"2026-02-18T19:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.204764 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:01:44.31278301 +0000 UTC Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.295819 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.295860 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.295871 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.295890 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.295903 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:06Z","lastTransitionTime":"2026-02-18T19:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.399564 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.400001 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.400244 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.400457 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.400629 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:06Z","lastTransitionTime":"2026-02-18T19:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.503903 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.504661 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.504870 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.505014 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.505223 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:06Z","lastTransitionTime":"2026-02-18T19:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.608348 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.608725 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.608943 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.609095 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.609283 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:06Z","lastTransitionTime":"2026-02-18T19:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.711943 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.711990 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.712000 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.712023 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.712034 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:06Z","lastTransitionTime":"2026-02-18T19:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.815080 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.815483 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.815634 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.815752 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.815858 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:06Z","lastTransitionTime":"2026-02-18T19:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.918665 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.918706 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.918714 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.918728 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:06 crc kubenswrapper[4754]: I0218 19:19:06.918737 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:06Z","lastTransitionTime":"2026-02-18T19:19:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.022361 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.022433 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.022454 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.022483 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.022501 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:07Z","lastTransitionTime":"2026-02-18T19:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.125717 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.125775 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.125788 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.125807 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.125819 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:07Z","lastTransitionTime":"2026-02-18T19:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.205875 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:56:37.351990482 +0000 UTC Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.209206 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.209291 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.209207 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:07 crc kubenswrapper[4754]: E0218 19:19:07.209343 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:07 crc kubenswrapper[4754]: E0218 19:19:07.209425 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.209287 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:07 crc kubenswrapper[4754]: E0218 19:19:07.209549 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:07 crc kubenswrapper[4754]: E0218 19:19:07.209652 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.227879 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.227956 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.227970 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.227991 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.228006 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:07Z","lastTransitionTime":"2026-02-18T19:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.331358 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.331419 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.331437 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.331462 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.331486 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:07Z","lastTransitionTime":"2026-02-18T19:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.433992 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.434051 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.434066 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.434086 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.434101 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:07Z","lastTransitionTime":"2026-02-18T19:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.536682 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.536739 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.536752 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.536802 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.536823 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:07Z","lastTransitionTime":"2026-02-18T19:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.642739 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.642793 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.642803 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.642821 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.642833 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:07Z","lastTransitionTime":"2026-02-18T19:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.746418 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.746509 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.746533 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.746570 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.746596 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:07Z","lastTransitionTime":"2026-02-18T19:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.849397 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.849488 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.849508 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.849540 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.849561 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:07Z","lastTransitionTime":"2026-02-18T19:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.953424 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.953474 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.953489 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.953516 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:07 crc kubenswrapper[4754]: I0218 19:19:07.953531 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:07Z","lastTransitionTime":"2026-02-18T19:19:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.056872 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.056914 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.056925 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.056943 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.056956 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:08Z","lastTransitionTime":"2026-02-18T19:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.159703 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.159741 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.159753 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.159770 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.159779 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:08Z","lastTransitionTime":"2026-02-18T19:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.206522 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 19:37:21.777844475 +0000 UTC Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.236397 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.259107 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.262491 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.262739 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.262922 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 
19:19:08.263110 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.263295 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:08Z","lastTransitionTime":"2026-02-18T19:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.282577 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.300649 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.333253 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:57Z\\\",\\\"message\\\":\\\" (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372588 6235 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 
19:18:57.372766 6235 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.372794 6235 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:18:57.372886 6235 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372961 6235 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373130 6235 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373847 6235 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:18:57.373895 6235 factory.go:656] Stopping watch factory\\\\nI0218 19:18:57.373919 6235 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-glx55_openshift-ovn-kubernetes(82e5683f-ada7-4578-a6e3-6f0dd72dd149)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a6
87cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.351273 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.366657 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.366687 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.366694 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.366707 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.366716 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:08Z","lastTransitionTime":"2026-02-18T19:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.371578 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231
ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.392330 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.411003 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.441662 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558
343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.456544 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:19:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.469362 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.469429 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.469474 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.469495 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.469508 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:08Z","lastTransitionTime":"2026-02-18T19:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.471814 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:08 crc 
kubenswrapper[4754]: I0218 19:19:08.489727 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.504966 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.517058 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:08Z is after 2025-08-24T17:21:41Z" Feb 18 
19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.533279 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fcf046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.572054 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.572098 
4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.572109 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.572124 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.572151 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:08Z","lastTransitionTime":"2026-02-18T19:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.674875 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.675201 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.675323 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.675405 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.675471 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:08Z","lastTransitionTime":"2026-02-18T19:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.778527 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.778849 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.779008 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.779178 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.779274 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:08Z","lastTransitionTime":"2026-02-18T19:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.882602 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.882654 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.882670 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.882690 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.882705 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:08Z","lastTransitionTime":"2026-02-18T19:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.959002 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:08 crc kubenswrapper[4754]: E0218 19:19:08.959254 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:19:40.959218463 +0000 UTC m=+83.409631269 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.986587 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.986657 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.986675 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.986703 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:08 crc kubenswrapper[4754]: I0218 19:19:08.986722 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:08Z","lastTransitionTime":"2026-02-18T19:19:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.060412 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.060504 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.060558 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.060601 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:09 crc kubenswrapper[4754]: E0218 19:19:09.060807 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Feb 18 19:19:09 crc kubenswrapper[4754]: E0218 19:19:09.060836 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:19:09 crc kubenswrapper[4754]: E0218 19:19:09.060857 4754 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:19:09 crc kubenswrapper[4754]: E0218 19:19:09.060939 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:41.060912532 +0000 UTC m=+83.511325358 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:19:09 crc kubenswrapper[4754]: E0218 19:19:09.061004 4754 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:19:09 crc kubenswrapper[4754]: E0218 19:19:09.061124 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-18 19:19:41.061093697 +0000 UTC m=+83.511506693 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:19:09 crc kubenswrapper[4754]: E0218 19:19:09.061264 4754 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:19:09 crc kubenswrapper[4754]: E0218 19:19:09.061297 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:41.061288662 +0000 UTC m=+83.511701698 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:19:09 crc kubenswrapper[4754]: E0218 19:19:09.061031 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:19:09 crc kubenswrapper[4754]: E0218 19:19:09.061335 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:19:09 crc kubenswrapper[4754]: E0218 19:19:09.061352 4754 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:19:09 crc kubenswrapper[4754]: E0218 19:19:09.061405 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:41.061394605 +0000 UTC m=+83.511807641 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.091308 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.091416 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.091430 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.091453 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.091470 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:09Z","lastTransitionTime":"2026-02-18T19:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.195167 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.195210 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.195223 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.195241 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.195258 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:09Z","lastTransitionTime":"2026-02-18T19:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.207671 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 10:49:27.178060598 +0000 UTC Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.209067 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.209075 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.209120 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.209216 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:09 crc kubenswrapper[4754]: E0218 19:19:09.209255 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:09 crc kubenswrapper[4754]: E0218 19:19:09.209435 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:09 crc kubenswrapper[4754]: E0218 19:19:09.209645 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:09 crc kubenswrapper[4754]: E0218 19:19:09.209681 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.298323 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.298403 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.298418 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.298459 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.298473 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:09Z","lastTransitionTime":"2026-02-18T19:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.400780 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.400824 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.400839 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.400857 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.400867 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:09Z","lastTransitionTime":"2026-02-18T19:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.503758 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.503799 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.503810 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.503825 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.503836 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:09Z","lastTransitionTime":"2026-02-18T19:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.606628 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.606667 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.606679 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.606698 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.606710 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:09Z","lastTransitionTime":"2026-02-18T19:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.708723 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.708763 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.708775 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.708792 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.708803 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:09Z","lastTransitionTime":"2026-02-18T19:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.812168 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.812212 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.812224 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.812241 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.812252 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:09Z","lastTransitionTime":"2026-02-18T19:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.914918 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.914954 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.914962 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.914977 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:09 crc kubenswrapper[4754]: I0218 19:19:09.914987 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:09Z","lastTransitionTime":"2026-02-18T19:19:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.017007 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.017049 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.017060 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.017077 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.017088 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:10Z","lastTransitionTime":"2026-02-18T19:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.119788 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.119843 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.119858 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.119878 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.119891 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:10Z","lastTransitionTime":"2026-02-18T19:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.208421 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:12:09.652646953 +0000 UTC Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.221883 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.221922 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.221931 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.221944 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.221953 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:10Z","lastTransitionTime":"2026-02-18T19:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.324384 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.324643 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.324727 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.324799 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.324862 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:10Z","lastTransitionTime":"2026-02-18T19:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.427124 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.427177 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.427186 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.427201 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.427209 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:10Z","lastTransitionTime":"2026-02-18T19:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.529435 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.529466 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.529475 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.529488 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.529497 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:10Z","lastTransitionTime":"2026-02-18T19:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.602605 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.614271 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.619539 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:10Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.632712 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.632770 4754 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.632792 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.632820 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.632841 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:10Z","lastTransitionTime":"2026-02-18T19:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.636192 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:10Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.657538 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558
343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:10Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.673799 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:10Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.689521 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:10Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.701728 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:10Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:10 crc 
kubenswrapper[4754]: I0218 19:19:10.715304 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:10Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:10 crc 
kubenswrapper[4754]: I0218 19:19:10.726600 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:10Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.735000 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.735038 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.735047 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.735060 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.735070 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:10Z","lastTransitionTime":"2026-02-18T19:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.739522 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fc
f046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:10Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.747577 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:10Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.756677 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:10Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.775234 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:10Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.787964 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:10Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.800278 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:10Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.811796 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:10Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.827500 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:57Z\\\",\\\"message\\\":\\\" (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372588 6235 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 
19:18:57.372766 6235 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.372794 6235 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:18:57.372886 6235 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372961 6235 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373130 6235 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373847 6235 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:18:57.373895 6235 factory.go:656] Stopping watch factory\\\\nI0218 19:18:57.373919 6235 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-glx55_openshift-ovn-kubernetes(82e5683f-ada7-4578-a6e3-6f0dd72dd149)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a6
87cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:10Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.837192 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.837254 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.837267 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.837289 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.837301 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:10Z","lastTransitionTime":"2026-02-18T19:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.939221 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.939256 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.939267 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.939289 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:10 crc kubenswrapper[4754]: I0218 19:19:10.939300 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:10Z","lastTransitionTime":"2026-02-18T19:19:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.041804 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.041845 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.041855 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.041874 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.041884 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:11Z","lastTransitionTime":"2026-02-18T19:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.144841 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.144885 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.144895 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.144909 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.144954 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:11Z","lastTransitionTime":"2026-02-18T19:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.208559 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:35:42.021217067 +0000 UTC Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.208748 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:11 crc kubenswrapper[4754]: E0218 19:19:11.208889 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.208749 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.208776 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:11 crc kubenswrapper[4754]: E0218 19:19:11.209087 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.208768 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:11 crc kubenswrapper[4754]: E0218 19:19:11.209202 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:11 crc kubenswrapper[4754]: E0218 19:19:11.209264 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.248003 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.248050 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.248102 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.248123 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.248175 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:11Z","lastTransitionTime":"2026-02-18T19:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.351537 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.351599 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.351612 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.351631 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.351642 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:11Z","lastTransitionTime":"2026-02-18T19:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.453979 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.454028 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.454036 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.454052 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.454065 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:11Z","lastTransitionTime":"2026-02-18T19:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.556782 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.556855 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.556865 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.556878 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.556887 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:11Z","lastTransitionTime":"2026-02-18T19:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.659522 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.659561 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.659573 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.659590 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.659601 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:11Z","lastTransitionTime":"2026-02-18T19:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.720962 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.734833 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.746895 4754 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fcf046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.758355 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.761979 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:11 crc 
kubenswrapper[4754]: I0218 19:19:11.762061 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.762081 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.762133 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.762197 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:11Z","lastTransitionTime":"2026-02-18T19:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.772249 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca6
8a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.784575 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.796238 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.808273 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.846396 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:57Z\\\",\\\"message\\\":\\\" (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372588 6235 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 
19:18:57.372766 6235 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.372794 6235 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:18:57.372886 6235 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372961 6235 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373130 6235 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373847 6235 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:18:57.373895 6235 factory.go:656] Stopping watch factory\\\\nI0218 19:18:57.373919 6235 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-glx55_openshift-ovn-kubernetes(82e5683f-ada7-4578-a6e3-6f0dd72dd149)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a6
87cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.865047 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.865084 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.865095 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.865112 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.865124 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:11Z","lastTransitionTime":"2026-02-18T19:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.873076 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.889428 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.900165 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.926299 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558
343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.938019 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f082e73e-90b3-4709-8f92-30e0e8bd69fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa0d5ed3320e375aa7ce21f39b9ad34357cc203bdf072e2d3464424ad135058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9437ec7801e5224e69e4648a5c6ae8228ce67a66fa49926879f0479a14b6e99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55dcb9c40ddbefcf612d63ca8f95a6101bcb7372164e6f35c742617062763f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.956583 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.966910 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.966949 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.966958 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.966972 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.966984 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:11Z","lastTransitionTime":"2026-02-18T19:19:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:11 crc kubenswrapper[4754]: I0218 19:19:11.971563 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:11 crc 
kubenswrapper[4754]: I0218 19:19:11.991341 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.008105 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:12Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.069046 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.069111 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.069126 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.069164 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.069181 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:12Z","lastTransitionTime":"2026-02-18T19:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.171533 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.171575 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.171587 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.171603 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.171614 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:12Z","lastTransitionTime":"2026-02-18T19:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.209527 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 22:38:21.454140035 +0000 UTC Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.273911 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.273948 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.274003 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.274023 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.274034 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:12Z","lastTransitionTime":"2026-02-18T19:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.376202 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.376265 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.376281 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.376303 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.376321 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:12Z","lastTransitionTime":"2026-02-18T19:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.478581 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.478625 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.478637 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.478657 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.478669 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:12Z","lastTransitionTime":"2026-02-18T19:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.581049 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.581160 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.581179 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.581200 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.581217 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:12Z","lastTransitionTime":"2026-02-18T19:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.684423 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.684510 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.684530 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.684560 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.684578 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:12Z","lastTransitionTime":"2026-02-18T19:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.787671 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.787759 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.787777 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.787796 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.787811 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:12Z","lastTransitionTime":"2026-02-18T19:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.891299 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.891380 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.891391 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.891413 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.891427 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:12Z","lastTransitionTime":"2026-02-18T19:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.899213 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs\") pod \"network-metrics-daemon-qztvz\" (UID: \"539505bb-b2d2-4adc-be1e-a95f73778a52\") " pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:12 crc kubenswrapper[4754]: E0218 19:19:12.899368 4754 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:19:12 crc kubenswrapper[4754]: E0218 19:19:12.899451 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs podName:539505bb-b2d2-4adc-be1e-a95f73778a52 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:28.899426429 +0000 UTC m=+71.349839225 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs") pod "network-metrics-daemon-qztvz" (UID: "539505bb-b2d2-4adc-be1e-a95f73778a52") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.995670 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.995726 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.995737 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.995754 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:12 crc kubenswrapper[4754]: I0218 19:19:12.995766 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:12Z","lastTransitionTime":"2026-02-18T19:19:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.099218 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.099306 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.099321 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.099343 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.099355 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:13Z","lastTransitionTime":"2026-02-18T19:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.203525 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.203592 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.203614 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.203644 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.203668 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:13Z","lastTransitionTime":"2026-02-18T19:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.209244 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:13 crc kubenswrapper[4754]: E0218 19:19:13.209453 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.209489 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.209539 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.209576 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:13 crc kubenswrapper[4754]: E0218 19:19:13.209636 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.209677 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:38:58.506293627 +0000 UTC Feb 18 19:19:13 crc kubenswrapper[4754]: E0218 19:19:13.209812 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:13 crc kubenswrapper[4754]: E0218 19:19:13.210028 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.307305 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.307348 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.307360 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.307381 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.307397 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:13Z","lastTransitionTime":"2026-02-18T19:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.410848 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.410899 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.410911 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.410931 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.410945 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:13Z","lastTransitionTime":"2026-02-18T19:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.514992 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.515087 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.515106 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.515129 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.515175 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:13Z","lastTransitionTime":"2026-02-18T19:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.620486 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.620582 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.620601 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.620630 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.620650 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:13Z","lastTransitionTime":"2026-02-18T19:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.723663 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.723699 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.723733 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.723747 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.723756 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:13Z","lastTransitionTime":"2026-02-18T19:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.825836 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.825876 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.825888 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.825904 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.825916 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:13Z","lastTransitionTime":"2026-02-18T19:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.928920 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.929223 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.929318 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.929412 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:13 crc kubenswrapper[4754]: I0218 19:19:13.929491 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:13Z","lastTransitionTime":"2026-02-18T19:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.032053 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.032093 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.032104 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.032121 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.032131 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:14Z","lastTransitionTime":"2026-02-18T19:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.135020 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.135425 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.135599 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.135761 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.135897 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:14Z","lastTransitionTime":"2026-02-18T19:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.210629 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 03:43:46.802914364 +0000 UTC Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.239357 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.239422 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.239442 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.239471 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.239496 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:14Z","lastTransitionTime":"2026-02-18T19:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.299925 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.300013 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.300035 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.300068 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.300094 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:14Z","lastTransitionTime":"2026-02-18T19:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:14 crc kubenswrapper[4754]: E0218 19:19:14.323245 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:14Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.328686 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.328754 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.328764 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.328781 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.328791 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:14Z","lastTransitionTime":"2026-02-18T19:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:14 crc kubenswrapper[4754]: E0218 19:19:14.347548 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:14Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.352805 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.352984 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.353025 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.353060 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.353085 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:14Z","lastTransitionTime":"2026-02-18T19:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:14 crc kubenswrapper[4754]: E0218 19:19:14.372999 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:14Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.379005 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.379061 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.379079 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.379114 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.379131 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:14Z","lastTransitionTime":"2026-02-18T19:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:14Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:14 crc kubenswrapper[4754]: E0218 19:19:14.423183 4754 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.425533 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.425775 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.425960 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.426119 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.426312 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:14Z","lastTransitionTime":"2026-02-18T19:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.530472 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.531091 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.531341 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.531672 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.532000 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:14Z","lastTransitionTime":"2026-02-18T19:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.634885 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.635248 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.635393 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.635529 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.635670 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:14Z","lastTransitionTime":"2026-02-18T19:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.739712 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.740060 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.740204 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.740289 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.740358 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:14Z","lastTransitionTime":"2026-02-18T19:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.843389 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.843687 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.843756 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.843847 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.843919 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:14Z","lastTransitionTime":"2026-02-18T19:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.946445 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.946772 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.946843 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.946920 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:14 crc kubenswrapper[4754]: I0218 19:19:14.946999 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:14Z","lastTransitionTime":"2026-02-18T19:19:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.049994 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.050059 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.050077 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.050106 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.050203 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:15Z","lastTransitionTime":"2026-02-18T19:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.153367 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.153430 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.153450 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.153475 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.153494 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:15Z","lastTransitionTime":"2026-02-18T19:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.208905 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.209012 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.208935 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.208920 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:15 crc kubenswrapper[4754]: E0218 19:19:15.209111 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:15 crc kubenswrapper[4754]: E0218 19:19:15.209272 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:15 crc kubenswrapper[4754]: E0218 19:19:15.209388 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:15 crc kubenswrapper[4754]: E0218 19:19:15.209488 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.212097 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 04:12:19.989973277 +0000 UTC Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.257586 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.257644 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.257663 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.257692 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.257713 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:15Z","lastTransitionTime":"2026-02-18T19:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.360498 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.360824 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.361007 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.361254 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.361486 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:15Z","lastTransitionTime":"2026-02-18T19:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.464900 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.465434 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.465634 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.465826 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.466009 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:15Z","lastTransitionTime":"2026-02-18T19:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.568821 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.569240 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.569516 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.569951 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.570278 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:15Z","lastTransitionTime":"2026-02-18T19:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.673814 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.673888 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.673909 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.673937 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.673955 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:15Z","lastTransitionTime":"2026-02-18T19:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.776977 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.777041 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.777058 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.777082 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.777099 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:15Z","lastTransitionTime":"2026-02-18T19:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.879606 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.879655 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.879672 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.879694 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.879711 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:15Z","lastTransitionTime":"2026-02-18T19:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.983552 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.983635 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.983656 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.983687 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:15 crc kubenswrapper[4754]: I0218 19:19:15.983708 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:15Z","lastTransitionTime":"2026-02-18T19:19:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.087981 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.088077 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.088110 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.088178 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.088199 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:16Z","lastTransitionTime":"2026-02-18T19:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.192453 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.192503 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.192514 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.192531 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.192542 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:16Z","lastTransitionTime":"2026-02-18T19:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.210206 4754 scope.go:117] "RemoveContainer" containerID="294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.212433 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 11:39:33.801274296 +0000 UTC Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.295246 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.295289 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.295297 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.295312 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.295323 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:16Z","lastTransitionTime":"2026-02-18T19:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.399640 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.399715 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.399726 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.399759 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.399774 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:16Z","lastTransitionTime":"2026-02-18T19:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.502806 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.502847 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.502855 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.502867 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.502877 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:16Z","lastTransitionTime":"2026-02-18T19:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.606183 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.606222 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.606233 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.606252 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.606264 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:16Z","lastTransitionTime":"2026-02-18T19:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.613455 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glx55_82e5683f-ada7-4578-a6e3-6f0dd72dd149/ovnkube-controller/1.log" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.617207 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerStarted","Data":"6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d"} Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.618572 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.636948 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.668040 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.686591 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.707473 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d
586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.709593 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.709618 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.709630 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:16 crc 
kubenswrapper[4754]: I0218 19:19:16.709650 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.709665 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:16Z","lastTransitionTime":"2026-02-18T19:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.726158 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fcf046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18
T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.737919 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-m
ultus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.753439 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContaine
rStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.766888 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.780177 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.793870 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.812044 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.812113 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.812128 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.812171 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.812188 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:16Z","lastTransitionTime":"2026-02-18T19:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.813465 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:57Z\\\",\\\"message\\\":\\\" (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372588 6235 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 
19:18:57.372766 6235 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.372794 6235 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:18:57.372886 6235 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372961 6235 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373130 6235 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373847 6235 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:18:57.373895 6235 factory.go:656] Stopping watch factory\\\\nI0218 19:18:57.373919 6235 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.826901 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.846567 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.860621 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.876919 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558
343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.895255 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f082e73e-90b3-4709-8f92-30e0e8bd69fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa0d5ed3320e375aa7ce21f39b9ad34357cc203bdf072e2d3464424ad135058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9437ec7801e5224e69e4648a5c6ae8228ce67a66fa49926879f0479a14b6e99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55dcb9c40ddbefcf612d63ca8f95a6101bcb7372164e6f35c742617062763f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.910308 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.914948 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.915001 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.915013 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.915035 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:16 crc kubenswrapper[4754]: I0218 19:19:16.915051 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:16Z","lastTransitionTime":"2026-02-18T19:19:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.018328 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.018385 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.018401 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.018423 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.018439 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:17Z","lastTransitionTime":"2026-02-18T19:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.121921 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.122384 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.122402 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.122431 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.122448 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:17Z","lastTransitionTime":"2026-02-18T19:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.208837 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:17 crc kubenswrapper[4754]: E0218 19:19:17.208977 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.209088 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.209234 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.209265 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:17 crc kubenswrapper[4754]: E0218 19:19:17.209397 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:17 crc kubenswrapper[4754]: E0218 19:19:17.209597 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:17 crc kubenswrapper[4754]: E0218 19:19:17.209782 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.214122 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 19:04:17.836895871 +0000 UTC Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.225758 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.225801 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.225814 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.225832 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.225846 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:17Z","lastTransitionTime":"2026-02-18T19:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.338164 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.338228 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.338245 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.338264 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.338281 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:17Z","lastTransitionTime":"2026-02-18T19:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.441759 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.441833 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.441847 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.441871 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.441889 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:17Z","lastTransitionTime":"2026-02-18T19:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.544672 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.544737 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.544755 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.544778 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.544796 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:17Z","lastTransitionTime":"2026-02-18T19:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.623935 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glx55_82e5683f-ada7-4578-a6e3-6f0dd72dd149/ovnkube-controller/2.log" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.625244 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glx55_82e5683f-ada7-4578-a6e3-6f0dd72dd149/ovnkube-controller/1.log" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.628319 4754 generic.go:334] "Generic (PLEG): container finished" podID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerID="6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d" exitCode=1 Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.628413 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerDied","Data":"6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d"} Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.628534 4754 scope.go:117] "RemoveContainer" containerID="294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.629504 4754 scope.go:117] "RemoveContainer" containerID="6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d" Feb 18 19:19:17 crc kubenswrapper[4754]: E0218 19:19:17.629766 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-glx55_openshift-ovn-kubernetes(82e5683f-ada7-4578-a6e3-6f0dd72dd149)\"" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.648660 4754 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.648791 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.648811 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.648840 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.648895 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:17Z","lastTransitionTime":"2026-02-18T19:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.652853 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.670916 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.696765 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558
343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.718171 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f082e73e-90b3-4709-8f92-30e0e8bd69fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa0d5ed3320e375aa7ce21f39b9ad34357cc203bdf072e2d3464424ad135058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9437ec7801e5224e69e4648a5c6ae8228ce67a66fa49926879f0479a14b6e99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55dcb9c40ddbefcf612d63ca8f95a6101bcb7372164e6f35c742617062763f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.737512 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.752149 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.752193 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.752206 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.752223 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.752233 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:17Z","lastTransitionTime":"2026-02-18T19:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.754963 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:17 crc 
kubenswrapper[4754]: I0218 19:19:17.771599 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.790096 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.805588 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 18 
19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.820614 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fcf046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.835368 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca6
8a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.847733 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.854848 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.854921 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.854936 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 
19:19:17.854954 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.854964 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:17Z","lastTransitionTime":"2026-02-18T19:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.863405 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.880378 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.898844 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:57Z\\\",\\\"message\\\":\\\" (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372588 6235 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 
19:18:57.372766 6235 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.372794 6235 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:18:57.372886 6235 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372961 6235 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373130 6235 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373847 6235 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:18:57.373895 6235 factory.go:656] Stopping watch factory\\\\nI0218 19:18:57.373919 6235 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:19:17Z\\\",\\\"message\\\":\\\"s:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, 
services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), Groups:[]string(nil)}}\\\\nI0218 19:19:17.102077 6422 lb_config.go:1031] Cluster endpoints for openshift-dns/dns-default for network=default are: map[]\\\\nI0218 19:19:17.102114 6422 services_controller.go:443] Built service openshift-dns/dns-default LB cluster-wide configs for network=default: 
[]servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d709
2b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.910347 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.923533 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.957741 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.957816 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.957832 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.957858 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:17 crc kubenswrapper[4754]: I0218 19:19:17.957877 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:17Z","lastTransitionTime":"2026-02-18T19:19:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.060804 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.060841 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.060849 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.060862 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.060871 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:18Z","lastTransitionTime":"2026-02-18T19:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.163066 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.163117 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.163125 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.163153 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.163164 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:18Z","lastTransitionTime":"2026-02-18T19:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.214367 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 00:56:29.704876929 +0000 UTC Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.228299 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02
-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.243015 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 
19:19:18.254525 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fcf046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.265104 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.265195 4754 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.265204 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.265224 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.265236 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:18Z","lastTransitionTime":"2026-02-18T19:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.266612 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.287370 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://294cd59726b9e9aa6bc67a58c02492a80beeabe5959c083510820c12a21b21f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:57Z\\\",\\\"message\\\":\\\" (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372588 6235 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 
19:18:57.372766 6235 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.372794 6235 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 19:18:57.372886 6235 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 19:18:57.372961 6235 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373130 6235 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 19:18:57.373847 6235 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 19:18:57.373895 6235 factory.go:656] Stopping watch factory\\\\nI0218 19:18:57.373919 6235 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:19:17Z\\\",\\\"message\\\":\\\"s:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, 
services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), Groups:[]string(nil)}}\\\\nI0218 19:19:17.102077 6422 lb_config.go:1031] Cluster endpoints for openshift-dns/dns-default for network=default are: map[]\\\\nI0218 19:19:17.102114 6422 services_controller.go:443] Built service openshift-dns/dns-default LB cluster-wide configs for network=default: 
[]servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:19:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d709
2b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.297416 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.314822 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.333408 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca6
8a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.347103 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.359264 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.367009 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.367048 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.367058 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.367074 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.367085 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:18Z","lastTransitionTime":"2026-02-18T19:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.372672 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f082e73e-90b3-4709-8f92-30e0e8bd69fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa0d5ed3320e375aa7ce21f39b9ad34357cc203bdf072e2d3464424ad135058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\
\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9437ec7801e5224e69e4648a5c6ae8228ce67a66fa49926879f0479a14b6e99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55dcb9c40ddbefcf612d63ca8f95a6101bcb7372164e6f35c742617062763f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8
a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.389322 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.403123 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.416889 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558
343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.431693 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.441746 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.451283 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc 
kubenswrapper[4754]: I0218 19:19:18.469792 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.469843 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.469861 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.469892 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.469908 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:18Z","lastTransitionTime":"2026-02-18T19:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.572827 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.572887 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.572896 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.572910 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.572920 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:18Z","lastTransitionTime":"2026-02-18T19:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.633778 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glx55_82e5683f-ada7-4578-a6e3-6f0dd72dd149/ovnkube-controller/2.log" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.636907 4754 scope.go:117] "RemoveContainer" containerID="6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d" Feb 18 19:19:18 crc kubenswrapper[4754]: E0218 19:19:18.637127 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-glx55_openshift-ovn-kubernetes(82e5683f-ada7-4578-a6e3-6f0dd72dd149)\"" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.657972 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca6
8a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.674096 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.675413 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.675446 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.675460 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 
19:19:18.675480 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.675492 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:18Z","lastTransitionTime":"2026-02-18T19:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.690539 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.706657 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.731232 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:19:17Z\\\",\\\"message\\\":\\\"s:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, 
Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), Groups:[]string(nil)}}\\\\nI0218 19:19:17.102077 6422 lb_config.go:1031] Cluster endpoints for openshift-dns/dns-default for network=default are: map[]\\\\nI0218 19:19:17.102114 6422 services_controller.go:443] Built service openshift-dns/dns-default LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:19:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-glx55_openshift-ovn-kubernetes(82e5683f-ada7-4578-a6e3-6f0dd72dd149)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a6
87cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.742789 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.755424 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.768443 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.778647 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.778693 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.778705 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.778724 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.778740 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:18Z","lastTransitionTime":"2026-02-18T19:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.779776 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.794266 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731ba
a8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.806487 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f082e73e-90b3-4709-8f92-30e0e8bd69fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa0d5ed3320e375aa7ce21f39b9ad34357cc203bdf072e2d3464424ad135058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9437ec7801e5224e69e4648a5c6ae8228ce67a66fa49926879f0479a14b6e99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55dcb9c40ddbefcf612d63ca8f95a6101bcb7372164e6f35c742617062763f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d1
4a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.817054 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.825896 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc 
kubenswrapper[4754]: I0218 19:19:18.837492 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.848315 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.858875 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 
19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.871259 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fcf046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:18Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.880923 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.880951 
4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.880960 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.880973 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.880983 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:18Z","lastTransitionTime":"2026-02-18T19:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.983436 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.983518 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.983531 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.983553 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:18 crc kubenswrapper[4754]: I0218 19:19:18.983567 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:18Z","lastTransitionTime":"2026-02-18T19:19:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.086661 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.086714 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.086726 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.086747 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.086758 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:19Z","lastTransitionTime":"2026-02-18T19:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.188971 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.189012 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.189023 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.189038 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.189050 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:19Z","lastTransitionTime":"2026-02-18T19:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.209288 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.209340 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.209395 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.209437 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:19 crc kubenswrapper[4754]: E0218 19:19:19.209546 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:19 crc kubenswrapper[4754]: E0218 19:19:19.209623 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:19 crc kubenswrapper[4754]: E0218 19:19:19.209696 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:19 crc kubenswrapper[4754]: E0218 19:19:19.209757 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.214876 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 14:12:39.579757988 +0000 UTC Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.291754 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.291800 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.291811 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.291826 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.291835 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:19Z","lastTransitionTime":"2026-02-18T19:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.394612 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.394657 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.394671 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.394724 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.394737 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:19Z","lastTransitionTime":"2026-02-18T19:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.496821 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.496880 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.496891 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.496913 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.496922 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:19Z","lastTransitionTime":"2026-02-18T19:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.599391 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.599444 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.599453 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.599465 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.599473 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:19Z","lastTransitionTime":"2026-02-18T19:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.710085 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.710121 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.710128 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.710154 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.710168 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:19Z","lastTransitionTime":"2026-02-18T19:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.812210 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.812245 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.812256 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.812269 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.812277 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:19Z","lastTransitionTime":"2026-02-18T19:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.916004 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.916320 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.916329 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.916343 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:19 crc kubenswrapper[4754]: I0218 19:19:19.916354 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:19Z","lastTransitionTime":"2026-02-18T19:19:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.018442 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.018490 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.018501 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.018519 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.018531 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:20Z","lastTransitionTime":"2026-02-18T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.121205 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.121254 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.121264 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.121281 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.121292 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:20Z","lastTransitionTime":"2026-02-18T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.215376 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 17:24:41.577163253 +0000 UTC Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.223071 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.223192 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.223256 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.223295 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.223318 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:20Z","lastTransitionTime":"2026-02-18T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.325988 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.326050 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.326062 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.326080 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.326093 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:20Z","lastTransitionTime":"2026-02-18T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.429094 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.429163 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.429179 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.429199 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.429213 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:20Z","lastTransitionTime":"2026-02-18T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.532119 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.532206 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.532218 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.532236 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.532249 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:20Z","lastTransitionTime":"2026-02-18T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.635246 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.635290 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.635314 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.635332 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.635345 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:20Z","lastTransitionTime":"2026-02-18T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.738437 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.738505 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.738529 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.738561 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.738585 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:20Z","lastTransitionTime":"2026-02-18T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.841633 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.841707 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.841727 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.841754 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.841773 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:20Z","lastTransitionTime":"2026-02-18T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.944732 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.944801 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.944897 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.944975 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:20 crc kubenswrapper[4754]: I0218 19:19:20.945019 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:20Z","lastTransitionTime":"2026-02-18T19:19:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.052746 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.052825 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.052840 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.052866 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.052885 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:21Z","lastTransitionTime":"2026-02-18T19:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.156428 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.156473 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.156489 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.156511 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.156526 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:21Z","lastTransitionTime":"2026-02-18T19:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.208870 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:21 crc kubenswrapper[4754]: E0218 19:19:21.209007 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.209077 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:21 crc kubenswrapper[4754]: E0218 19:19:21.209136 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.209225 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:21 crc kubenswrapper[4754]: E0218 19:19:21.209292 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.209338 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:21 crc kubenswrapper[4754]: E0218 19:19:21.209394 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.215710 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 17:51:24.18235587 +0000 UTC Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.258832 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.258858 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.258866 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.258878 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.258886 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:21Z","lastTransitionTime":"2026-02-18T19:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.361905 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.361988 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.362006 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.362032 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.362050 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:21Z","lastTransitionTime":"2026-02-18T19:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.464221 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.464318 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.464336 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.464360 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.464379 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:21Z","lastTransitionTime":"2026-02-18T19:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.566491 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.566533 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.566544 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.566561 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.566575 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:21Z","lastTransitionTime":"2026-02-18T19:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.669046 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.669089 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.669101 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.669119 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.669131 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:21Z","lastTransitionTime":"2026-02-18T19:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.771558 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.771613 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.771630 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.771654 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.771671 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:21Z","lastTransitionTime":"2026-02-18T19:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.874125 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.874206 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.874218 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.874236 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.874250 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:21Z","lastTransitionTime":"2026-02-18T19:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.977060 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.977162 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.977185 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.977210 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:21 crc kubenswrapper[4754]: I0218 19:19:21.977225 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:21Z","lastTransitionTime":"2026-02-18T19:19:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.080045 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.080130 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.080223 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.080252 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.080276 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:22Z","lastTransitionTime":"2026-02-18T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.183062 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.183116 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.183125 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.183152 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.183160 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:22Z","lastTransitionTime":"2026-02-18T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.216886 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:03:18.73587808 +0000 UTC Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.286066 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.286132 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.286155 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.286171 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.286180 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:22Z","lastTransitionTime":"2026-02-18T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.388945 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.388979 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.388989 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.389003 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.389014 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:22Z","lastTransitionTime":"2026-02-18T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.491480 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.491557 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.491572 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.491596 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.491612 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:22Z","lastTransitionTime":"2026-02-18T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.594656 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.594735 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.594754 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.595292 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.595362 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:22Z","lastTransitionTime":"2026-02-18T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.699880 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.699933 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.699950 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.700017 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.700239 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:22Z","lastTransitionTime":"2026-02-18T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.802604 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.802671 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.802685 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.802701 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.802712 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:22Z","lastTransitionTime":"2026-02-18T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.905503 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.905586 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.905621 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.905656 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:22 crc kubenswrapper[4754]: I0218 19:19:22.905680 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:22Z","lastTransitionTime":"2026-02-18T19:19:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.009227 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.009683 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.009725 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.009759 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.009784 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:23Z","lastTransitionTime":"2026-02-18T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.112187 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.112247 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.112263 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.112282 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.112295 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:23Z","lastTransitionTime":"2026-02-18T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.209532 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.209656 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.209575 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.209563 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:23 crc kubenswrapper[4754]: E0218 19:19:23.209797 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:23 crc kubenswrapper[4754]: E0218 19:19:23.210059 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:23 crc kubenswrapper[4754]: E0218 19:19:23.209954 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:23 crc kubenswrapper[4754]: E0218 19:19:23.210208 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.215112 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.215209 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.215230 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.215261 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.215281 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:23Z","lastTransitionTime":"2026-02-18T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.217509 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 09:20:28.361409395 +0000 UTC Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.318171 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.318202 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.318211 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.318240 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.318250 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:23Z","lastTransitionTime":"2026-02-18T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.420996 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.421051 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.421064 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.421080 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.421091 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:23Z","lastTransitionTime":"2026-02-18T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.523428 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.523462 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.523474 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.523490 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.523502 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:23Z","lastTransitionTime":"2026-02-18T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.625500 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.625539 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.625550 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.625565 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.625576 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:23Z","lastTransitionTime":"2026-02-18T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.729320 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.729542 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.729621 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.729707 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.729760 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:23Z","lastTransitionTime":"2026-02-18T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.833276 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.833330 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.833342 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.833364 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.833377 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:23Z","lastTransitionTime":"2026-02-18T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.936739 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.936795 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.936805 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.936822 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:23 crc kubenswrapper[4754]: I0218 19:19:23.936832 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:23Z","lastTransitionTime":"2026-02-18T19:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.039543 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.039625 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.039649 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.039679 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.039697 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:24Z","lastTransitionTime":"2026-02-18T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.142638 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.142684 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.142696 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.142712 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.142724 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:24Z","lastTransitionTime":"2026-02-18T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.218579 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 20:25:07.589858394 +0000 UTC Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.245425 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.245495 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.245515 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.245534 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.245546 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:24Z","lastTransitionTime":"2026-02-18T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.348044 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.348082 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.348092 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.348107 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.348118 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:24Z","lastTransitionTime":"2026-02-18T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.450788 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.450831 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.450843 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.450860 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.450870 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:24Z","lastTransitionTime":"2026-02-18T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.552960 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.553003 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.553012 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.553025 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.553035 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:24Z","lastTransitionTime":"2026-02-18T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.655006 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.655052 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.655062 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.655079 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.655090 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:24Z","lastTransitionTime":"2026-02-18T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.675783 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.675859 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.675876 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.675904 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.675923 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:24Z","lastTransitionTime":"2026-02-18T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:24 crc kubenswrapper[4754]: E0218 19:19:24.689898 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.694755 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.694803 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.694813 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.694834 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.694849 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:24Z","lastTransitionTime":"2026-02-18T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:24 crc kubenswrapper[4754]: E0218 19:19:24.708993 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.714379 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.714438 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.714451 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.714472 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.714488 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:24Z","lastTransitionTime":"2026-02-18T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.736273 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.736324 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.736339 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.736362 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.736376 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:24Z","lastTransitionTime":"2026-02-18T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.752067 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.752098 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.752110 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.752126 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.752135 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:24Z","lastTransitionTime":"2026-02-18T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:24 crc kubenswrapper[4754]: E0218 19:19:24.763864 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:24Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:24 crc kubenswrapper[4754]: E0218 19:19:24.763984 4754 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.765570 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.765595 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.765605 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.765619 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.765629 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:24Z","lastTransitionTime":"2026-02-18T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.867809 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.867837 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.867846 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.867859 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.867868 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:24Z","lastTransitionTime":"2026-02-18T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.970483 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.970525 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.970540 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.970557 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:24 crc kubenswrapper[4754]: I0218 19:19:24.970566 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:24Z","lastTransitionTime":"2026-02-18T19:19:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.073304 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.073347 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.073358 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.073374 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.073385 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:25Z","lastTransitionTime":"2026-02-18T19:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.176103 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.176154 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.176167 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.176184 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.176194 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:25Z","lastTransitionTime":"2026-02-18T19:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.208754 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.208799 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.208799 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.208868 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:25 crc kubenswrapper[4754]: E0218 19:19:25.208973 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:25 crc kubenswrapper[4754]: E0218 19:19:25.209040 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:25 crc kubenswrapper[4754]: E0218 19:19:25.209173 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:25 crc kubenswrapper[4754]: E0218 19:19:25.209253 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.219470 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:19:23.160603592 +0000 UTC Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.278532 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.278580 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.278592 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.278609 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.278625 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:25Z","lastTransitionTime":"2026-02-18T19:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.381758 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.381840 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.381852 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.381868 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.381879 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:25Z","lastTransitionTime":"2026-02-18T19:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.485621 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.485668 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.485693 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.485719 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.485729 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:25Z","lastTransitionTime":"2026-02-18T19:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.589098 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.589186 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.589199 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.589218 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.589231 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:25Z","lastTransitionTime":"2026-02-18T19:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.692005 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.692062 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.692076 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.692095 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.692109 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:25Z","lastTransitionTime":"2026-02-18T19:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.794706 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.795127 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.795271 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.795344 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.795408 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:25Z","lastTransitionTime":"2026-02-18T19:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.898822 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.898878 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.898889 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.898906 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:25 crc kubenswrapper[4754]: I0218 19:19:25.898918 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:25Z","lastTransitionTime":"2026-02-18T19:19:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.002490 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.002543 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.002557 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.002573 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.002582 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:26Z","lastTransitionTime":"2026-02-18T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.105835 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.105897 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.105906 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.105922 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.105931 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:26Z","lastTransitionTime":"2026-02-18T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.208212 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.208290 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.208316 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.208346 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.208367 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:26Z","lastTransitionTime":"2026-02-18T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.220211 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:33:21.480975601 +0000 UTC Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.310707 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.310795 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.310814 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.310838 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.310855 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:26Z","lastTransitionTime":"2026-02-18T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.413452 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.413504 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.413517 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.413533 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.413546 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:26Z","lastTransitionTime":"2026-02-18T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.515782 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.515884 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.515903 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.515933 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.515955 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:26Z","lastTransitionTime":"2026-02-18T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.617994 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.618035 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.618045 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.618061 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.618076 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:26Z","lastTransitionTime":"2026-02-18T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.720962 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.721008 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.721021 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.721036 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.721046 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:26Z","lastTransitionTime":"2026-02-18T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.823857 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.823941 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.823956 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.823985 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.824016 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:26Z","lastTransitionTime":"2026-02-18T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.926441 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.926500 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.926514 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.926532 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:26 crc kubenswrapper[4754]: I0218 19:19:26.926545 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:26Z","lastTransitionTime":"2026-02-18T19:19:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.029863 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.029922 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.029935 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.029955 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.029973 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:27Z","lastTransitionTime":"2026-02-18T19:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.132063 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.132100 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.132111 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.132125 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.132134 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:27Z","lastTransitionTime":"2026-02-18T19:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.208870 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.208962 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:27 crc kubenswrapper[4754]: E0218 19:19:27.209058 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.209090 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:27 crc kubenswrapper[4754]: E0218 19:19:27.209173 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.209197 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:27 crc kubenswrapper[4754]: E0218 19:19:27.209228 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:27 crc kubenswrapper[4754]: E0218 19:19:27.209406 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.221045 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:00:26.442906106 +0000 UTC Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.235408 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.235486 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.235498 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.235522 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.235543 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:27Z","lastTransitionTime":"2026-02-18T19:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.339489 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.339586 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.339613 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.339649 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.339677 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:27Z","lastTransitionTime":"2026-02-18T19:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.442657 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.442709 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.442719 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.442735 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.442748 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:27Z","lastTransitionTime":"2026-02-18T19:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.546565 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.546611 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.546622 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.546644 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.546662 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:27Z","lastTransitionTime":"2026-02-18T19:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.651232 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.651322 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.651348 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.651384 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.651411 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:27Z","lastTransitionTime":"2026-02-18T19:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.753818 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.753899 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.753914 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.753934 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.753947 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:27Z","lastTransitionTime":"2026-02-18T19:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.857376 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.857448 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.857462 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.857487 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.857506 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:27Z","lastTransitionTime":"2026-02-18T19:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.960743 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.960819 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.960837 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.960866 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:27 crc kubenswrapper[4754]: I0218 19:19:27.960890 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:27Z","lastTransitionTime":"2026-02-18T19:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.063213 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.063264 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.063276 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.063293 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.063309 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:28Z","lastTransitionTime":"2026-02-18T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.167475 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.167562 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.167588 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.167620 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.167643 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:28Z","lastTransitionTime":"2026-02-18T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.221162 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 07:26:51.025211649 +0000 UTC Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.239291 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\\\",\\\"image\\\":\\\"quay.io/crcont/op
enshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.253737 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.266959 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.270686 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.270753 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.270772 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:28 crc 
kubenswrapper[4754]: I0218 19:19:28.270796 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.270810 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:28Z","lastTransitionTime":"2026-02-18T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.288735 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.307811 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:19:17Z\\\",\\\"message\\\":\\\"s:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, 
Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), Groups:[]string(nil)}}\\\\nI0218 19:19:17.102077 6422 lb_config.go:1031] Cluster endpoints for openshift-dns/dns-default for network=default are: map[]\\\\nI0218 19:19:17.102114 6422 services_controller.go:443] Built service openshift-dns/dns-default LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:19:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-glx55_openshift-ovn-kubernetes(82e5683f-ada7-4578-a6e3-6f0dd72dd149)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a6
87cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.323362 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.339770 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.352822 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.368990 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558
343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.375915 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.375979 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.375996 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.376017 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.376030 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:28Z","lastTransitionTime":"2026-02-18T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.382586 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f082e73e-90b3-4709-8f92-30e0e8bd69fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa0d5ed3320e375aa7ce21f39b9ad34357cc203bdf072e2d3464424ad135058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9437ec7801e5224e69e4648a5c6ae8228ce67a66fa49926879f0479a14b6e99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55dcb9c40ddbefcf612d63ca8f95a6101bcb7372164e6f35c742617062763f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358
25771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.396079 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.409234 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:28 crc 
kubenswrapper[4754]: I0218 19:19:28.423082 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.437107 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.449909 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdea
f3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.466068 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fcf046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T19:19:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.479132 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.479228 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.479246 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.479270 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.479287 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:28Z","lastTransitionTime":"2026-02-18T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.485915 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:28Z 
is after 2025-08-24T17:21:41Z" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.582897 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.582958 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.582972 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.582992 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.583004 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:28Z","lastTransitionTime":"2026-02-18T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.686392 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.686450 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.686462 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.686482 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.686497 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:28Z","lastTransitionTime":"2026-02-18T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.789405 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.789463 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.789477 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.789497 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.789574 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:28Z","lastTransitionTime":"2026-02-18T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.892481 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.892531 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.892541 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.892562 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.892574 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:28Z","lastTransitionTime":"2026-02-18T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.994047 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs\") pod \"network-metrics-daemon-qztvz\" (UID: \"539505bb-b2d2-4adc-be1e-a95f73778a52\") " pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:28 crc kubenswrapper[4754]: E0218 19:19:28.994471 4754 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:19:28 crc kubenswrapper[4754]: E0218 19:19:28.994681 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs podName:539505bb-b2d2-4adc-be1e-a95f73778a52 nodeName:}" failed. No retries permitted until 2026-02-18 19:20:00.994634564 +0000 UTC m=+103.445047360 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs") pod "network-metrics-daemon-qztvz" (UID: "539505bb-b2d2-4adc-be1e-a95f73778a52") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.997517 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.997546 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.997557 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.997578 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:28 crc kubenswrapper[4754]: I0218 19:19:28.997591 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:28Z","lastTransitionTime":"2026-02-18T19:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.100847 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.100901 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.100915 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.100938 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.100952 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:29Z","lastTransitionTime":"2026-02-18T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.204290 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.204656 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.204743 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.204831 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.204910 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:29Z","lastTransitionTime":"2026-02-18T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.209708 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.209708 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.209737 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.209748 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:29 crc kubenswrapper[4754]: E0218 19:19:29.209947 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:29 crc kubenswrapper[4754]: E0218 19:19:29.210135 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:29 crc kubenswrapper[4754]: E0218 19:19:29.210226 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:29 crc kubenswrapper[4754]: E0218 19:19:29.210325 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.221610 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 07:16:05.889885326 +0000 UTC Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.307641 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.307692 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.307705 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.307724 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.307739 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:29Z","lastTransitionTime":"2026-02-18T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.420319 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.420384 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.420398 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.420421 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.420438 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:29Z","lastTransitionTime":"2026-02-18T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.524299 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.524404 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.524425 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.524452 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.524476 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:29Z","lastTransitionTime":"2026-02-18T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.627087 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.627182 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.627198 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.627227 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.627244 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:29Z","lastTransitionTime":"2026-02-18T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.732219 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.732270 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.732279 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.732296 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.732326 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:29Z","lastTransitionTime":"2026-02-18T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.834649 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.834708 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.834720 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.834739 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.834752 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:29Z","lastTransitionTime":"2026-02-18T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.937155 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.937204 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.937214 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.937235 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:29 crc kubenswrapper[4754]: I0218 19:19:29.937255 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:29Z","lastTransitionTime":"2026-02-18T19:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.040743 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.040874 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.040896 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.040929 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.040954 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:30Z","lastTransitionTime":"2026-02-18T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.144616 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.144656 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.144666 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.144683 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.144693 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:30Z","lastTransitionTime":"2026-02-18T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.221705 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 09:39:27.190722086 +0000 UTC Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.246635 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.246676 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.246685 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.246702 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.246714 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:30Z","lastTransitionTime":"2026-02-18T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.350571 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.350619 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.350628 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.350644 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.350657 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:30Z","lastTransitionTime":"2026-02-18T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.454641 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.454694 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.454704 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.454723 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.454734 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:30Z","lastTransitionTime":"2026-02-18T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.558161 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.558206 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.558215 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.558233 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.558244 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:30Z","lastTransitionTime":"2026-02-18T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.661077 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.661131 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.661162 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.661196 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.661224 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:30Z","lastTransitionTime":"2026-02-18T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.765976 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.766066 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.766087 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.766118 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.766175 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:30Z","lastTransitionTime":"2026-02-18T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.869230 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.869274 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.869288 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.869309 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.869325 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:30Z","lastTransitionTime":"2026-02-18T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.972219 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.972278 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.972292 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.972322 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:30 crc kubenswrapper[4754]: I0218 19:19:30.972338 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:30Z","lastTransitionTime":"2026-02-18T19:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.075452 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.075510 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.075521 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.075540 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.075554 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:31Z","lastTransitionTime":"2026-02-18T19:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.178522 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.178571 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.178580 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.178601 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.178613 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:31Z","lastTransitionTime":"2026-02-18T19:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.209498 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:31 crc kubenswrapper[4754]: E0218 19:19:31.209692 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.209963 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:31 crc kubenswrapper[4754]: E0218 19:19:31.210027 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.210186 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:31 crc kubenswrapper[4754]: E0218 19:19:31.210238 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.210354 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:31 crc kubenswrapper[4754]: E0218 19:19:31.210408 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.222223 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 22:07:18.172609966 +0000 UTC Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.281268 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.281352 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.281363 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.281386 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.281399 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:31Z","lastTransitionTime":"2026-02-18T19:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.383998 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.384051 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.384069 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.384094 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.384112 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:31Z","lastTransitionTime":"2026-02-18T19:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.487492 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.487590 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.487614 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.487651 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.487677 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:31Z","lastTransitionTime":"2026-02-18T19:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.591863 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.591930 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.591950 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.591986 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.592014 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:31Z","lastTransitionTime":"2026-02-18T19:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.695754 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.695808 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.695819 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.695843 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.695863 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:31Z","lastTransitionTime":"2026-02-18T19:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.799107 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.799203 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.799218 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.799245 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.799260 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:31Z","lastTransitionTime":"2026-02-18T19:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.902743 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.902806 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.902820 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.902841 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:31 crc kubenswrapper[4754]: I0218 19:19:31.902853 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:31Z","lastTransitionTime":"2026-02-18T19:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.006646 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.006709 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.006722 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.006741 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.006753 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:32Z","lastTransitionTime":"2026-02-18T19:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.111197 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.111335 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.111375 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.111409 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.111431 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:32Z","lastTransitionTime":"2026-02-18T19:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.218521 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.218580 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.218598 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.218625 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.218643 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:32Z","lastTransitionTime":"2026-02-18T19:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.222317 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 20:50:55.273914301 +0000 UTC Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.321738 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.321773 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.321789 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.321811 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.321830 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:32Z","lastTransitionTime":"2026-02-18T19:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.425367 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.425434 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.425450 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.425844 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.425898 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:32Z","lastTransitionTime":"2026-02-18T19:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.528211 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.528259 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.528269 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.528293 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.528306 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:32Z","lastTransitionTime":"2026-02-18T19:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.630831 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.630867 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.630878 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.630891 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.630901 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:32Z","lastTransitionTime":"2026-02-18T19:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.734344 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.734411 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.734425 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.734444 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.734454 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:32Z","lastTransitionTime":"2026-02-18T19:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.838380 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.838847 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.838973 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.839096 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.839256 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:32Z","lastTransitionTime":"2026-02-18T19:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.942865 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.943452 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.943719 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.943928 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:32 crc kubenswrapper[4754]: I0218 19:19:32.944069 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:32Z","lastTransitionTime":"2026-02-18T19:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.046842 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.047293 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.047404 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.047524 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.047611 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:33Z","lastTransitionTime":"2026-02-18T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.151503 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.152090 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.152326 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.152511 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.152638 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:33Z","lastTransitionTime":"2026-02-18T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.208973 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:33 crc kubenswrapper[4754]: E0218 19:19:33.209476 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.209798 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:33 crc kubenswrapper[4754]: E0218 19:19:33.209954 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.210215 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:33 crc kubenswrapper[4754]: E0218 19:19:33.210460 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.211816 4754 scope.go:117] "RemoveContainer" containerID="6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d" Feb 18 19:19:33 crc kubenswrapper[4754]: E0218 19:19:33.212060 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-glx55_openshift-ovn-kubernetes(82e5683f-ada7-4578-a6e3-6f0dd72dd149)\"" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.212340 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:33 crc kubenswrapper[4754]: E0218 19:19:33.212505 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.223117 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 09:24:49.152651543 +0000 UTC Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.256444 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.256725 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.256847 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.257014 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.257133 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:33Z","lastTransitionTime":"2026-02-18T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.359405 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.359858 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.360084 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.360322 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.360616 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:33Z","lastTransitionTime":"2026-02-18T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.463963 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.464035 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.464058 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.464089 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.464111 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:33Z","lastTransitionTime":"2026-02-18T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.567641 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.567694 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.567703 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.567718 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.567729 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:33Z","lastTransitionTime":"2026-02-18T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.670437 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.670473 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.670481 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.670494 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.670504 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:33Z","lastTransitionTime":"2026-02-18T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.774683 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.774728 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.774738 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.774754 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.774765 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:33Z","lastTransitionTime":"2026-02-18T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.877989 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.878264 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.878326 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.878426 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.878486 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:33Z","lastTransitionTime":"2026-02-18T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.981282 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.981557 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.981691 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.981780 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:33 crc kubenswrapper[4754]: I0218 19:19:33.981878 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:33Z","lastTransitionTime":"2026-02-18T19:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.085275 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.085347 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.085363 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.085382 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.085399 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:34Z","lastTransitionTime":"2026-02-18T19:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.187530 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.187574 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.187582 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.187595 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.187604 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:34Z","lastTransitionTime":"2026-02-18T19:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.221529 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.223782 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 14:27:12.85233095 +0000 UTC Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.290917 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.290962 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.290972 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.291005 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.291019 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:34Z","lastTransitionTime":"2026-02-18T19:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.393650 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.393686 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.393696 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.393710 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.393724 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:34Z","lastTransitionTime":"2026-02-18T19:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.496354 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.496708 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.496909 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.496999 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.497068 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:34Z","lastTransitionTime":"2026-02-18T19:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.600377 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.600423 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.600434 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.600451 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.600463 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:34Z","lastTransitionTime":"2026-02-18T19:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.686896 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pp2q2_55244610-cf2e-4b72-b8b7-9d55898fbb62/kube-multus/0.log" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.686979 4754 generic.go:334] "Generic (PLEG): container finished" podID="55244610-cf2e-4b72-b8b7-9d55898fbb62" containerID="a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1" exitCode=1 Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.687121 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pp2q2" event={"ID":"55244610-cf2e-4b72-b8b7-9d55898fbb62","Type":"ContainerDied","Data":"a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1"} Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.687999 4754 scope.go:117] "RemoveContainer" containerID="a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.705099 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.705164 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.705179 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.705197 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.705212 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:34Z","lastTransitionTime":"2026-02-18T19:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.707378 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b
8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.725241 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.741197 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.756605 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.782855 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:19:17Z\\\",\\\"message\\\":\\\"s:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, 
Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), Groups:[]string(nil)}}\\\\nI0218 19:19:17.102077 6422 lb_config.go:1031] Cluster endpoints for openshift-dns/dns-default for network=default are: map[]\\\\nI0218 19:19:17.102114 6422 services_controller.go:443] Built service openshift-dns/dns-default LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:19:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-glx55_openshift-ovn-kubernetes(82e5683f-ada7-4578-a6e3-6f0dd72dd149)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a6
87cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.794270 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.807971 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.808015 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.808029 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.808046 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.808060 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:34Z","lastTransitionTime":"2026-02-18T19:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.809270 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231
ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.820098 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.834856 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558
343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.846493 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f082e73e-90b3-4709-8f92-30e0e8bd69fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa0d5ed3320e375aa7ce21f39b9ad34357cc203bdf072e2d3464424ad135058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9437ec7801e5224e69e4648a5c6ae8228ce67a66fa49926879f0479a14b6e99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55dcb9c40ddbefcf612d63ca8f95a6101bcb7372164e6f35c742617062763f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.859376 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.869082 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:34 crc 
kubenswrapper[4754]: I0218 19:19:34.880301 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.890744 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.899700 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdea
f3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.909809 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fcf046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T19:19:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.910769 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.910791 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.910801 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.910816 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.910829 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:34Z","lastTransitionTime":"2026-02-18T19:19:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.920096 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affe3d5-fdb0-4797-8bce-1b481530cb04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b850d1c3185dba59c230f6286f3a76135edff3786413fd586f1594847ddd600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ef9f81f8ebc17fd6b21cca8878ddb21e1cd9e8583cabbcb46042aff79b22246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ef9f81f8ebc17fd6b21cca8878ddb21e1cd9e8583cabbcb46042aff79b22246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:34 crc kubenswrapper[4754]: I0218 19:19:34.930733 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:19:34Z\\\",\\\"message\\\":\\\"2026-02-18T19:18:47+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_56ebceb3-c62e-4b03-8305-8cd84a918da7\\\\n2026-02-18T19:18:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_56ebceb3-c62e-4b03-8305-8cd84a918da7 to /host/opt/cni/bin/\\\\n2026-02-18T19:18:49Z [verbose] multus-daemon started\\\\n2026-02-18T19:18:49Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:19:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:34Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.012972 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.013012 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.013023 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.013038 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.013049 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:35Z","lastTransitionTime":"2026-02-18T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.023737 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.023771 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.023782 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.023797 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.023807 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:35Z","lastTransitionTime":"2026-02-18T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:35 crc kubenswrapper[4754]: E0218 19:19:35.037250 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.040958 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.041002 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.041016 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.041036 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.041048 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:35Z","lastTransitionTime":"2026-02-18T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:35 crc kubenswrapper[4754]: E0218 19:19:35.054460 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.058190 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.058249 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.058258 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.058273 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.058282 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:35Z","lastTransitionTime":"2026-02-18T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} 
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.087622 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.087662 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.087675 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.087693 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.087706 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:35Z","lastTransitionTime":"2026-02-18T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:35 crc kubenswrapper[4754]: E0218 19:19:35.101598 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: E0218 19:19:35.101719 4754 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.114993 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.115031 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.115075 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.115089 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.115098 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:35Z","lastTransitionTime":"2026-02-18T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.209165 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.209235 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.209178 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.209174 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:35 crc kubenswrapper[4754]: E0218 19:19:35.209292 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:35 crc kubenswrapper[4754]: E0218 19:19:35.209390 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:35 crc kubenswrapper[4754]: E0218 19:19:35.209483 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:35 crc kubenswrapper[4754]: E0218 19:19:35.209516 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.217887 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.217918 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.217930 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.217944 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.217955 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:35Z","lastTransitionTime":"2026-02-18T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.224172 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:49:08.886369711 +0000 UTC Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.321516 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.321589 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.321602 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.321627 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.321646 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:35Z","lastTransitionTime":"2026-02-18T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.424015 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.424055 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.424066 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.424082 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.424097 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:35Z","lastTransitionTime":"2026-02-18T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.526369 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.526425 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.526438 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.526458 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.526471 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:35Z","lastTransitionTime":"2026-02-18T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.629067 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.629342 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.629423 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.629486 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.629579 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:35Z","lastTransitionTime":"2026-02-18T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.694225 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pp2q2_55244610-cf2e-4b72-b8b7-9d55898fbb62/kube-multus/0.log" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.694747 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pp2q2" event={"ID":"55244610-cf2e-4b72-b8b7-9d55898fbb62","Type":"ContainerStarted","Data":"1527f77f3016297e8b5250f9098c4049afcc33b06d7b6a5378f753a3870608a6"} Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.713678 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.731575 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.733248 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.733322 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.733343 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.733377 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.733399 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:35Z","lastTransitionTime":"2026-02-18T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.746196 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc 
kubenswrapper[4754]: I0218 19:19:35.761393 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affe3d5-fdb0-4797-8bce-1b481530cb04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b850d1c3185dba59c230f6286f3a76135edff3786413fd586f1594847ddd600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://3ef9f81f8ebc17fd6b21cca8878ddb21e1cd9e8583cabbcb46042aff79b22246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ef9f81f8ebc17fd6b21cca8878ddb21e1cd9e8583cabbcb46042aff79b22246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.782605 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1527f77f3016297e8b5250f9098c4049afcc33b06d7b6a5378f753a3870608a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:19:34Z\\\",\\\"message\\\":\\\"2026-02-18T19:18:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_56ebceb3-c62e-4b03-8305-8cd84a918da7\\\\n2026-02-18T19:18:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_56ebceb3-c62e-4b03-8305-8cd84a918da7 to /host/opt/cni/bin/\\\\n2026-02-18T19:18:49Z [verbose] multus-daemon started\\\\n2026-02-18T19:18:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:19:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.798587 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d
586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.810365 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fc
f046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.829401 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:19:17Z\\\",\\\"message\\\":\\\"s:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, 
Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), Groups:[]string(nil)}}\\\\nI0218 19:19:17.102077 6422 lb_config.go:1031] Cluster endpoints for openshift-dns/dns-default for network=default are: map[]\\\\nI0218 19:19:17.102114 6422 services_controller.go:443] Built service openshift-dns/dns-default LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:19:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-glx55_openshift-ovn-kubernetes(82e5683f-ada7-4578-a6e3-6f0dd72dd149)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a6
87cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.835895 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.835941 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.835954 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.835973 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.835983 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:35Z","lastTransitionTime":"2026-02-18T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.841742 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.854699 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.874123 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca6
8a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.895576 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.913576 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.927437 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.938724 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.938794 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.938810 4754 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.938830 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.938848 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:35Z","lastTransitionTime":"2026-02-18T19:19:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.944770 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f082e73e-90b3-4709-8f92-30e0e8bd69fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa0d5ed3320e375aa7ce21f39b9ad34357cc203bdf072e2d3464424ad135058\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9437ec7801e5224e69e4648a5c6ae8228ce67a66fa49926879f0479a14b6e99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55dcb9c40ddbefcf612d63ca8f95a6101bcb7372164e6f35c742617062763f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.958902 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.970089 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:35 crc kubenswrapper[4754]: I0218 19:19:35.992194 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558
343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.041673 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.041850 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.041951 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.042067 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.042164 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:36Z","lastTransitionTime":"2026-02-18T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.145426 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.145639 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.145696 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.145755 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.145846 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:36Z","lastTransitionTime":"2026-02-18T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.225281 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 04:27:35.785863415 +0000 UTC Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.248258 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.248337 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.248353 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.248377 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.248393 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:36Z","lastTransitionTime":"2026-02-18T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.351700 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.351773 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.351791 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.351816 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.351832 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:36Z","lastTransitionTime":"2026-02-18T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.455419 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.455738 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.455822 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.455938 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.456024 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:36Z","lastTransitionTime":"2026-02-18T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.558684 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.558725 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.558738 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.558762 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.558775 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:36Z","lastTransitionTime":"2026-02-18T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.661635 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.661719 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.661734 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.661757 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.661774 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:36Z","lastTransitionTime":"2026-02-18T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.765114 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.765192 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.765208 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.765227 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.765240 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:36Z","lastTransitionTime":"2026-02-18T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.868451 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.868516 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.868528 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.868551 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.868565 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:36Z","lastTransitionTime":"2026-02-18T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.971871 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.971940 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.971952 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.971970 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:36 crc kubenswrapper[4754]: I0218 19:19:36.971980 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:36Z","lastTransitionTime":"2026-02-18T19:19:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.075446 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.075497 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.075507 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.075523 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.075538 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:37Z","lastTransitionTime":"2026-02-18T19:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.179823 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.179899 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.179917 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.179952 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.179972 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:37Z","lastTransitionTime":"2026-02-18T19:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.209650 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.209650 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.209738 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.209782 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:37 crc kubenswrapper[4754]: E0218 19:19:37.209975 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:37 crc kubenswrapper[4754]: E0218 19:19:37.210426 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:37 crc kubenswrapper[4754]: E0218 19:19:37.210793 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:37 crc kubenswrapper[4754]: E0218 19:19:37.210842 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.226260 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 19:54:53.580473026 +0000 UTC Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.231582 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.283029 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.283129 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.283163 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.283196 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.283217 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:37Z","lastTransitionTime":"2026-02-18T19:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.386965 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.387022 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.387043 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.387070 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.387092 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:37Z","lastTransitionTime":"2026-02-18T19:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.490465 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.490537 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.490553 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.490574 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.490591 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:37Z","lastTransitionTime":"2026-02-18T19:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.593617 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.593706 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.593720 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.593745 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.593763 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:37Z","lastTransitionTime":"2026-02-18T19:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.697929 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.698001 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.698016 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.698040 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.698060 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:37Z","lastTransitionTime":"2026-02-18T19:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.801099 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.801191 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.801206 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.801232 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.801249 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:37Z","lastTransitionTime":"2026-02-18T19:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.904023 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.904319 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.904444 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.904558 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:37 crc kubenswrapper[4754]: I0218 19:19:37.904644 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:37Z","lastTransitionTime":"2026-02-18T19:19:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.007578 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.007991 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.008267 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.008487 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.008709 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:38Z","lastTransitionTime":"2026-02-18T19:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.113305 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.113372 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.113385 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.113418 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.113431 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:38Z","lastTransitionTime":"2026-02-18T19:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.216563 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.216943 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.217179 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.217544 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.217863 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:38Z","lastTransitionTime":"2026-02-18T19:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.226178 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.226781 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:41:06.696823816 +0000 UTC Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.239030 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.254177 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558
343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.266874 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f082e73e-90b3-4709-8f92-30e0e8bd69fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa0d5ed3320e375aa7ce21f39b9ad34357cc203bdf072e2d3464424ad135058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9437ec7801e5224e69e4648a5c6ae8228ce67a66fa49926879f0479a14b6e99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55dcb9c40ddbefcf612d63ca8f95a6101bcb7372164e6f35c742617062763f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.279473 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:19:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.291552 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:38 crc 
kubenswrapper[4754]: I0218 19:19:38.304378 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.315893 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1527f77f3016297e8b5250f9098c4049afcc33b06d7b6a5378f753a3870608a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"
exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:19:34Z\\\",\\\"message\\\":\\\"2026-02-18T19:18:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_56ebceb3-c62e-4b03-8305-8cd84a918da7\\\\n2026-02-18T19:18:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_56ebceb3-c62e-4b03-8305-8cd84a918da7 to /host/opt/cni/bin/\\\\n2026-02-18T19:18:49Z [verbose] multus-daemon started\\\\n2026-02-18T19:18:49Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:19:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\
\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.320641 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.320703 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.320720 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.320756 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:38 crc 
kubenswrapper[4754]: I0218 19:19:38.320770 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:38Z","lastTransitionTime":"2026-02-18T19:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.329090 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.340555 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fcf046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:38 crc 
kubenswrapper[4754]: I0218 19:19:38.349795 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affe3d5-fdb0-4797-8bce-1b481530cb04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b850d1c3185dba59c230f6286f3a76135edff3786413fd586f1594847ddd600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://3ef9f81f8ebc17fd6b21cca8878ddb21e1cd9e8583cabbcb46042aff79b22246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ef9f81f8ebc17fd6b21cca8878ddb21e1cd9e8583cabbcb46042aff79b22246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.368880 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"879b079c-46d9-492a-bf09-b4b1f07f0977\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46b77cef12b8c3593dfa85d7822513c52fa384ef7cfe71e30f24300271eb730a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4594d0d5f1a9342ac7af89120cafdc12a4313bb9590198916f5da4cc2f6591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6de71f467cb3ca8c53b98156e5dcc3fcf875f6c3e51dda3cd972201f1dff27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ce9e45fb31732d884c6570779abd8e272b02d032aeaec08779843c2667c4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0fb3d296168dd7b2584edeb7c9bcb692b389837c1d6e7848a30ae36b1fca86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710e0aebead8837db1519d0ebfb741e833f3ed7420c097f1c22d95c0d0b64083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://710e0aebead8837db1519d0ebfb741e833f3ed7420c097f1c22d95c0d0b64083\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5190b7092187e1c61ce42655a1732a4dca6ddf7fe391ebc731995ea488129cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5190b7092187e1c61ce42655a1732a4dca6ddf7fe391ebc731995ea488129cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ecf526b726249235f13ea526506e3540f3468b17d59926e917dd40cfeb3fe5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf526b726249235f13ea526506e3540f3468b17d59926e917dd40cfeb3fe5f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.380937 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.393668 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.407135 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.418716 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.424110 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.424161 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.424178 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.424201 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.424213 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:38Z","lastTransitionTime":"2026-02-18T19:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.435217 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:19:17Z\\\",\\\"message\\\":\\\"s:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, 
Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), Groups:[]string(nil)}}\\\\nI0218 19:19:17.102077 6422 lb_config.go:1031] Cluster endpoints for openshift-dns/dns-default for network=default are: map[]\\\\nI0218 19:19:17.102114 6422 services_controller.go:443] Built service openshift-dns/dns-default LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:19:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-glx55_openshift-ovn-kubernetes(82e5683f-ada7-4578-a6e3-6f0dd72dd149)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a6
87cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.446854 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.463829 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.527471 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.527519 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.527531 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.527550 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.527563 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:38Z","lastTransitionTime":"2026-02-18T19:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.630258 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.630323 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.630343 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.630368 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.630386 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:38Z","lastTransitionTime":"2026-02-18T19:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.733514 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.733590 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.733609 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.733636 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.733654 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:38Z","lastTransitionTime":"2026-02-18T19:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.837244 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.837319 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.837335 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.837362 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.837384 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:38Z","lastTransitionTime":"2026-02-18T19:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.941040 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.941099 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.941116 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.941171 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:38 crc kubenswrapper[4754]: I0218 19:19:38.941192 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:38Z","lastTransitionTime":"2026-02-18T19:19:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.044135 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.044212 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.044226 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.044246 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.044264 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:39Z","lastTransitionTime":"2026-02-18T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.146741 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.146784 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.146798 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.146815 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.146830 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:39Z","lastTransitionTime":"2026-02-18T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.209298 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.209480 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:39 crc kubenswrapper[4754]: E0218 19:19:39.209646 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.209712 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.209733 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:39 crc kubenswrapper[4754]: E0218 19:19:39.209891 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:39 crc kubenswrapper[4754]: E0218 19:19:39.209975 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:39 crc kubenswrapper[4754]: E0218 19:19:39.210131 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.228341 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 12:49:34.620231233 +0000 UTC Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.249630 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.249703 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.249718 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.249746 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.249765 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:39Z","lastTransitionTime":"2026-02-18T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.352521 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.352598 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.352618 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.352650 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.352672 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:39Z","lastTransitionTime":"2026-02-18T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.456274 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.456328 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.456345 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.456389 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.456409 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:39Z","lastTransitionTime":"2026-02-18T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.559413 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.559477 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.559490 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.559513 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.559528 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:39Z","lastTransitionTime":"2026-02-18T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.662020 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.662056 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.662064 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.662078 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.662088 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:39Z","lastTransitionTime":"2026-02-18T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.766532 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.766599 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.766611 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.766631 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.766648 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:39Z","lastTransitionTime":"2026-02-18T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.869058 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.869119 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.869132 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.869177 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.869190 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:39Z","lastTransitionTime":"2026-02-18T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.972983 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.973044 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.973057 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.973076 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:39 crc kubenswrapper[4754]: I0218 19:19:39.973090 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:39Z","lastTransitionTime":"2026-02-18T19:19:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.077581 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.077693 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.077729 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.077776 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.077815 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:40Z","lastTransitionTime":"2026-02-18T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.181349 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.181411 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.181425 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.181446 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.181516 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:40Z","lastTransitionTime":"2026-02-18T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.229185 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:18:05.523716364 +0000 UTC Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.284624 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.284887 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.284947 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.285051 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.285113 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:40Z","lastTransitionTime":"2026-02-18T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.388723 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.388779 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.388798 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.388830 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.388850 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:40Z","lastTransitionTime":"2026-02-18T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.492072 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.492155 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.492170 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.492193 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.492209 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:40Z","lastTransitionTime":"2026-02-18T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.594616 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.594708 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.594720 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.594736 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.594794 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:40Z","lastTransitionTime":"2026-02-18T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.697364 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.697407 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.697416 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.697430 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.697441 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:40Z","lastTransitionTime":"2026-02-18T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.800902 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.800970 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.800988 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.801011 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.801034 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:40Z","lastTransitionTime":"2026-02-18T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.904320 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.904399 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.904424 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.904462 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:40 crc kubenswrapper[4754]: I0218 19:19:40.904489 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:40Z","lastTransitionTime":"2026-02-18T19:19:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.008090 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.008130 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.008151 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.008170 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.008188 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:41Z","lastTransitionTime":"2026-02-18T19:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.031282 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:41 crc kubenswrapper[4754]: E0218 19:19:41.031447 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:20:45.031428024 +0000 UTC m=+147.481840810 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.110602 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.110697 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.110729 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.110767 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.110794 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:41Z","lastTransitionTime":"2026-02-18T19:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.132467 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.132526 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.132561 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.132599 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:41 crc kubenswrapper[4754]: E0218 19:19:41.132725 4754 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:19:41 crc kubenswrapper[4754]: E0218 19:19:41.132793 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:20:45.132775897 +0000 UTC m=+147.583188693 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:19:41 crc kubenswrapper[4754]: E0218 19:19:41.132880 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:19:41 crc kubenswrapper[4754]: E0218 19:19:41.132917 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:19:41 crc kubenswrapper[4754]: E0218 19:19:41.132933 4754 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:19:41 crc kubenswrapper[4754]: E0218 19:19:41.132940 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:19:41 crc kubenswrapper[4754]: E0218 19:19:41.132988 4754 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:20:45.132971623 +0000 UTC m=+147.583384439 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:19:41 crc kubenswrapper[4754]: E0218 19:19:41.133006 4754 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:19:41 crc kubenswrapper[4754]: E0218 19:19:41.133007 4754 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:19:41 crc kubenswrapper[4754]: E0218 19:19:41.133052 4754 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:19:41 crc kubenswrapper[4754]: E0218 19:19:41.133065 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:20:45.133056495 +0000 UTC m=+147.583469291 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:19:41 crc kubenswrapper[4754]: E0218 19:19:41.133212 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:20:45.133135547 +0000 UTC m=+147.583548523 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.209305 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:41 crc kubenswrapper[4754]: E0218 19:19:41.209444 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.209467 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.209532 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.209566 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:41 crc kubenswrapper[4754]: E0218 19:19:41.209774 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:41 crc kubenswrapper[4754]: E0218 19:19:41.209870 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:41 crc kubenswrapper[4754]: E0218 19:19:41.210091 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.214252 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.214314 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.214330 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.214349 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.214364 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:41Z","lastTransitionTime":"2026-02-18T19:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.229930 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:04:43.694965976 +0000 UTC Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.317265 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.317314 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.317332 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.317350 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.317362 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:41Z","lastTransitionTime":"2026-02-18T19:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.420789 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.420825 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.420834 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.420850 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.420862 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:41Z","lastTransitionTime":"2026-02-18T19:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.523930 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.524003 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.524021 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.524050 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.524070 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:41Z","lastTransitionTime":"2026-02-18T19:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.626429 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.626476 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.626489 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.626508 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.626521 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:41Z","lastTransitionTime":"2026-02-18T19:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.729490 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.729537 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.729550 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.729567 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.729577 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:41Z","lastTransitionTime":"2026-02-18T19:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.832911 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.832985 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.833003 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.833039 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.833065 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:41Z","lastTransitionTime":"2026-02-18T19:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.936325 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.936400 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.936417 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.936447 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:41 crc kubenswrapper[4754]: I0218 19:19:41.936464 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:41Z","lastTransitionTime":"2026-02-18T19:19:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.039893 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.039953 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.040013 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.040090 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.040114 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:42Z","lastTransitionTime":"2026-02-18T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.143383 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.143470 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.143497 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.143533 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.143559 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:42Z","lastTransitionTime":"2026-02-18T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.230880 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 20:16:09.568651498 +0000 UTC Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.246836 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.246928 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.246953 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.246991 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.247012 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:42Z","lastTransitionTime":"2026-02-18T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.350614 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.350677 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.350689 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.350712 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.350732 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:42Z","lastTransitionTime":"2026-02-18T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.453929 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.453996 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.454008 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.454030 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.454044 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:42Z","lastTransitionTime":"2026-02-18T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.556763 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.556800 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.556808 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.556821 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.556830 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:42Z","lastTransitionTime":"2026-02-18T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.660451 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.660554 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.660572 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.660599 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.660617 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:42Z","lastTransitionTime":"2026-02-18T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.772575 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.772664 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.772694 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.772724 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.772746 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:42Z","lastTransitionTime":"2026-02-18T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.876329 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.876411 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.876428 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.876454 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.876475 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:42Z","lastTransitionTime":"2026-02-18T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.979971 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.980027 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.980040 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.980065 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:42 crc kubenswrapper[4754]: I0218 19:19:42.980079 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:42Z","lastTransitionTime":"2026-02-18T19:19:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.083741 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.083903 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.083925 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.083955 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.083977 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:43Z","lastTransitionTime":"2026-02-18T19:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.188802 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.188864 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.188882 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.188912 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.188930 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:43Z","lastTransitionTime":"2026-02-18T19:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.209399 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.209472 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.209452 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.209400 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:43 crc kubenswrapper[4754]: E0218 19:19:43.209632 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:43 crc kubenswrapper[4754]: E0218 19:19:43.209750 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:43 crc kubenswrapper[4754]: E0218 19:19:43.209842 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:43 crc kubenswrapper[4754]: E0218 19:19:43.209915 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.231541 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 20:29:40.535639698 +0000 UTC Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.291895 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.292018 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.292044 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.292073 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.292097 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:43Z","lastTransitionTime":"2026-02-18T19:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.395735 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.395804 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.395828 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.395862 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.395889 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:43Z","lastTransitionTime":"2026-02-18T19:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.499299 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.499409 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.499435 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.499593 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.499618 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:43Z","lastTransitionTime":"2026-02-18T19:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.602972 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.603045 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.603067 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.603097 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.603118 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:43Z","lastTransitionTime":"2026-02-18T19:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.705390 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.705427 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.705435 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.705452 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.705463 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:43Z","lastTransitionTime":"2026-02-18T19:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.809083 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.809136 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.809161 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.809181 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.809195 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:43Z","lastTransitionTime":"2026-02-18T19:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.911844 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.911896 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.911909 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.911944 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:43 crc kubenswrapper[4754]: I0218 19:19:43.911958 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:43Z","lastTransitionTime":"2026-02-18T19:19:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.015099 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.015193 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.015213 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.015237 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.015256 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:44Z","lastTransitionTime":"2026-02-18T19:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.119225 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.119276 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.119291 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.119310 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.119322 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:44Z","lastTransitionTime":"2026-02-18T19:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.221528 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.221581 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.221596 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.221615 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.221629 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:44Z","lastTransitionTime":"2026-02-18T19:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.232166 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 22:49:48.70940623 +0000 UTC Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.325482 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.325558 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.325571 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.325600 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.325616 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:44Z","lastTransitionTime":"2026-02-18T19:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.428655 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.428718 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.428729 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.428753 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.428765 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:44Z","lastTransitionTime":"2026-02-18T19:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.533071 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.533181 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.533206 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.533235 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.533255 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:44Z","lastTransitionTime":"2026-02-18T19:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.636542 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.636600 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.636612 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.636665 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.636679 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:44Z","lastTransitionTime":"2026-02-18T19:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.739895 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.739975 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.739996 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.740024 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.740043 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:44Z","lastTransitionTime":"2026-02-18T19:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.843639 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.843723 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.843741 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.843769 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.843787 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:44Z","lastTransitionTime":"2026-02-18T19:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.947183 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.947265 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.947284 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.947313 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:44 crc kubenswrapper[4754]: I0218 19:19:44.947342 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:44Z","lastTransitionTime":"2026-02-18T19:19:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.050778 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.050855 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.050880 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.050915 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.050941 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:45Z","lastTransitionTime":"2026-02-18T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.154947 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.154994 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.155004 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.155021 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.155036 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:45Z","lastTransitionTime":"2026-02-18T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.208858 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.208937 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.208906 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.208861 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:45 crc kubenswrapper[4754]: E0218 19:19:45.209081 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:45 crc kubenswrapper[4754]: E0218 19:19:45.209270 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:45 crc kubenswrapper[4754]: E0218 19:19:45.209835 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:45 crc kubenswrapper[4754]: E0218 19:19:45.209934 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.224911 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.225022 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.225099 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.225192 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.225230 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:45Z","lastTransitionTime":"2026-02-18T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.232765 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:58:52.486159282 +0000 UTC Feb 18 19:19:45 crc kubenswrapper[4754]: E0218 19:19:45.256246 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",
\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.261613 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.261643 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.261678 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.261696 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.261707 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:45Z","lastTransitionTime":"2026-02-18T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:45 crc kubenswrapper[4754]: E0218 19:19:45.279040 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.283263 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.283295 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.283305 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.283320 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.283329 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:45Z","lastTransitionTime":"2026-02-18T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:45 crc kubenswrapper[4754]: E0218 19:19:45.299070 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.302881 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.302922 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.302937 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.302955 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.302968 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:45Z","lastTransitionTime":"2026-02-18T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:45 crc kubenswrapper[4754]: E0218 19:19:45.318273 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.321654 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.321707 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.321724 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.321747 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.321764 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:45Z","lastTransitionTime":"2026-02-18T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:45 crc kubenswrapper[4754]: E0218 19:19:45.346561 4754 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b2b83d7-b7bf-4d49-9f49-d7ce420be65a\\\",\\\"systemUUID\\\":\\\"bca81bce-8907-42d1-98a5-0dfb89b9f859\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:45 crc kubenswrapper[4754]: E0218 19:19:45.346790 4754 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.348830 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.348888 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.348906 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.348928 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.348945 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:45Z","lastTransitionTime":"2026-02-18T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.451036 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.451116 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.451132 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.451182 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.451196 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:45Z","lastTransitionTime":"2026-02-18T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.554558 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.554588 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.554598 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.554614 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.554625 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:45Z","lastTransitionTime":"2026-02-18T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.656942 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.656994 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.657006 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.657028 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.657042 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:45Z","lastTransitionTime":"2026-02-18T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.760317 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.760353 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.760363 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.760379 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:45 crc kubenswrapper[4754]: I0218 19:19:45.760390 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:45Z","lastTransitionTime":"2026-02-18T19:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.391592 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:46 crc kubenswrapper[4754]: E0218 19:19:46.391704 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.392467 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 05:09:12.164996367 +0000 UTC Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.393519 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:46 crc kubenswrapper[4754]: E0218 19:19:46.393624 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.395213 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.395254 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.395264 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.395277 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.395286 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:46Z","lastTransitionTime":"2026-02-18T19:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.497496 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.497534 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.497544 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.497563 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.497573 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:46Z","lastTransitionTime":"2026-02-18T19:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.600204 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.600244 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.600254 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.600269 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.600279 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:46Z","lastTransitionTime":"2026-02-18T19:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.702980 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.703053 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.703066 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.703085 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.703097 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:46Z","lastTransitionTime":"2026-02-18T19:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.805925 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.805982 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.805993 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.806009 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.806022 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:46Z","lastTransitionTime":"2026-02-18T19:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.909341 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.909385 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.909410 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.909425 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:46 crc kubenswrapper[4754]: I0218 19:19:46.909435 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:46Z","lastTransitionTime":"2026-02-18T19:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.012456 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.012530 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.012553 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.012581 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.012601 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:47Z","lastTransitionTime":"2026-02-18T19:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.115308 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.115340 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.115349 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.115362 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.115372 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:47Z","lastTransitionTime":"2026-02-18T19:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.209405 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.209433 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:47 crc kubenswrapper[4754]: E0218 19:19:47.209758 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:47 crc kubenswrapper[4754]: E0218 19:19:47.209845 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.210031 4754 scope.go:117] "RemoveContainer" containerID="6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.216748 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.216785 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.216803 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.216819 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.216831 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:47Z","lastTransitionTime":"2026-02-18T19:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.321304 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.321361 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.321375 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.321395 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.321414 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:47Z","lastTransitionTime":"2026-02-18T19:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.393436 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:51:25.805337346 +0000 UTC Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.423610 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.423643 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.423656 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.423674 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.423687 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:47Z","lastTransitionTime":"2026-02-18T19:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.526366 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.526409 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.526421 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.526437 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.526449 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:47Z","lastTransitionTime":"2026-02-18T19:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.629329 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.629378 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.629391 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.629410 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.629422 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:47Z","lastTransitionTime":"2026-02-18T19:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.732273 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.732327 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.732338 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.732355 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.732370 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:47Z","lastTransitionTime":"2026-02-18T19:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.744861 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glx55_82e5683f-ada7-4578-a6e3-6f0dd72dd149/ovnkube-controller/2.log" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.748364 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerStarted","Data":"e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810"} Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.748757 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.768905 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"879b079c-46d9-492a-bf09-b4b1f07f0977\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46b77cef12b8c3593dfa85d7822513c52fa384ef7cfe71e30f24300271eb730a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4594d0d5f1a9342ac7af89120cafdc12a4313bb9590198916f5da4cc2f6591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6de71f467cb3ca8c53b98156e5dcc3fcf875f6c3e51dda3cd972201f1dff27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ce9e45fb31732d884c6570779abd8e272b02d032aeaec08779843c2667c4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0fb3d296168dd7b2584edeb7c9bcb692b389837c1d6e7848a30ae36b1fca86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710e0aebead8837db1519d0ebfb741e833f3ed7420c097f1c22d95c0d0b64083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://710e0aebead8837db1519d0ebfb741e833f3ed7420c097f1c22d95c0d0b64083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5190b7092187e1c61ce42655a1732a4dca6ddf7fe391ebc731995ea488129cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5190b7092187e1c61ce42655a1732a4dca6ddf7fe391ebc731995ea488129cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec
f526b726249235f13ea526506e3540f3468b17d59926e917dd40cfeb3fe5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf526b726249235f13ea526506e3540f3468b17d59926e917dd40cfeb3fe5f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.781265 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca6
8a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.792859 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.806774 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.821601 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.835296 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.835331 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.835342 4754 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.835357 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.835368 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:47Z","lastTransitionTime":"2026-02-18T19:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.841599 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:19:17Z\\\",\\\"message\\\":\\\"s:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, 
Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), Groups:[]string(nil)}}\\\\nI0218 19:19:17.102077 6422 lb_config.go:1031] Cluster endpoints for openshift-dns/dns-default for network=default are: map[]\\\\nI0218 19:19:17.102114 6422 services_controller.go:443] Built service openshift-dns/dns-default LB cluster-wide configs for network=default: 
[]servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:19:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.861159 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.884284 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.932124 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.937405 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.937447 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.937459 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.937476 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.937487 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:47Z","lastTransitionTime":"2026-02-18T19:19:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.948523 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.964326 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731ba
a8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.974570 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f082e73e-90b3-4709-8f92-30e0e8bd69fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa0d5ed3320e375aa7ce21f39b9ad34357cc203bdf072e2d3464424ad135058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9437ec7801e5224e69e4648a5c6ae8228ce67a66fa49926879f0479a14b6e99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55dcb9c40ddbefcf612d63ca8f95a6101bcb7372164e6f35c742617062763f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d1
4a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.987624 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:19:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:47 crc kubenswrapper[4754]: I0218 19:19:47.998257 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc 
kubenswrapper[4754]: I0218 19:19:48.010623 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.023559 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1527f77f3016297e8b5250f9098c4049afcc33b06d7b6a5378f753a3870608a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"
exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:19:34Z\\\",\\\"message\\\":\\\"2026-02-18T19:18:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_56ebceb3-c62e-4b03-8305-8cd84a918da7\\\\n2026-02-18T19:18:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_56ebceb3-c62e-4b03-8305-8cd84a918da7 to /host/opt/cni/bin/\\\\n2026-02-18T19:18:49Z [verbose] multus-daemon started\\\\n2026-02-18T19:18:49Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:19:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\
\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.034685 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d
586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.039654 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.039694 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.039706 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:48 crc 
kubenswrapper[4754]: I0218 19:19:48.039725 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.039735 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:48Z","lastTransitionTime":"2026-02-18T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.046883 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fcf046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18
T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.059442 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affe3d5-fdb0-4797-8bce-1b481530cb04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b850d1c3185dba59c230f6286f3a76135edff3786413fd586f1594847ddd600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ef9f81f8ebc17fd6b21cca8878ddb21e1cd9e8583cabbcb46042aff79b22246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ef9f81f8ebc17fd6b21cca8878ddb21e1cd9e8583cabbcb46042aff79b22246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.141709 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.142072 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.142085 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.142103 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.142117 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:48Z","lastTransitionTime":"2026-02-18T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.209103 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.209103 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:48 crc kubenswrapper[4754]: E0218 19:19:48.209270 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:48 crc kubenswrapper[4754]: E0218 19:19:48.209346 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.221183 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affe3d5-fdb0-4797-8bce-1b481530cb04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b850d1c3185dba59c230f6286f3a76135edff3786413fd586f1594847ddd600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ef9f81f8ebc17fd6b21cca8878ddb21e1cd9e8583cabbcb46042aff79b22246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ef9f81f8ebc17fd6b21cca8878ddb21e1cd9e8583cabbcb46042aff79b22246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.233933 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1527f77f3016297e8b5250f9098c4049afcc33b06d7b6a5378f753a3870608a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:19:34Z\\\",\\\"message\\\":\\\"2026-02-18T19:18:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_56ebceb3-c62e-4b03-8305-8cd84a918da7\\\\n2026-02-18T19:18:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_56ebceb3-c62e-4b03-8305-8cd84a918da7 to /host/opt/cni/bin/\\\\n2026-02-18T19:18:49Z [verbose] multus-daemon started\\\\n2026-02-18T19:18:49Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:19:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.245483 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.245563 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.245588 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.245617 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.245647 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:48Z","lastTransitionTime":"2026-02-18T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.249599 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.263788 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fcf046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.280765 4754 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.301458 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"879b079c-46d9-492a-bf09-b4b1f07f0977\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46b77cef12b8c3593dfa85d7822513c52fa384ef7cfe71e30f24300271eb730a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4594d0d5f1a9342ac7af89120cafdc12a4313bb9590198916f5da4cc2f6591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6de71f467cb3ca8c53b98156e5dcc3fcf875f6c3e51dda3cd972201f1dff27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ce9e45fb31732d884c6570779abd8e272b02d032aeaec08779843c2667c4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0fb3d296168dd7b2584edeb7c9bcb692b389837c1d6e7848a30ae36b1fca86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710e0aebead8837db1519d0ebfb741e833f3ed7420c097f1c22d95c0d0b64083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://710e0aebead8837db1519d0ebfb741e833f3ed7420c097f1c22d95c0d0b64083\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5190b7092187e1c61ce42655a1732a4dca6ddf7fe391ebc731995ea488129cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5190b7092187e1c61ce42655a1732a4dca6ddf7fe391ebc731995ea488129cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ecf526b726249235f13ea526506e3540f3468b17d59926e917dd40cfeb3fe5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf526b726249235f13ea526506e3540f3468b17d59926e917dd40cfeb3fe5f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.325880 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.347205 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.348342 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.348398 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.348413 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.348438 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.348451 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:48Z","lastTransitionTime":"2026-02-18T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.375963 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.394278 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 21:05:20.924827305 +0000 UTC Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.394545 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.423572 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:19:17Z\\\",\\\"message\\\":\\\"s:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, 
Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), Groups:[]string(nil)}}\\\\nI0218 19:19:17.102077 6422 lb_config.go:1031] Cluster endpoints for openshift-dns/dns-default for network=default are: map[]\\\\nI0218 19:19:17.102114 6422 services_controller.go:443] Built service openshift-dns/dns-default LB cluster-wide configs for network=default: 
[]servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:19:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.435569 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.450672 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f082e73e-90b3-4709-8f92-30e0e8bd69fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa0d5ed3320e375aa7ce21f39b9ad34357cc203bdf072e2d3464424ad135058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9437ec7801e5224e69e4648a5c6ae8228ce67a66fa49926879f0479a14b6e99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55dcb9c40ddbefcf612d63ca8f95a6101bcb7372164e6f35c742617062763f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.451435 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.451462 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.451472 4754 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.451490 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.451504 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:48Z","lastTransitionTime":"2026-02-18T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.470500 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265
a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.486656 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v
krdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.508117 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"stat
e\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.522531 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.533785 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.544347 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc 
kubenswrapper[4754]: I0218 19:19:48.554519 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.554600 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.554619 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.554645 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.554662 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:48Z","lastTransitionTime":"2026-02-18T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.657590 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.657636 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.657647 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.657665 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.657675 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:48Z","lastTransitionTime":"2026-02-18T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.754501 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glx55_82e5683f-ada7-4578-a6e3-6f0dd72dd149/ovnkube-controller/3.log" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.755237 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glx55_82e5683f-ada7-4578-a6e3-6f0dd72dd149/ovnkube-controller/2.log" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.758966 4754 generic.go:334] "Generic (PLEG): container finished" podID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerID="e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810" exitCode=1 Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.759054 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerDied","Data":"e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810"} Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.759190 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.759212 4754 scope.go:117] "RemoveContainer" containerID="6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.759217 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.759366 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.759394 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.759409 
4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:48Z","lastTransitionTime":"2026-02-18T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.760464 4754 scope.go:117] "RemoveContainer" containerID="e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810" Feb 18 19:19:48 crc kubenswrapper[4754]: E0218 19:19:48.760952 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-glx55_openshift-ovn-kubernetes(82e5683f-ada7-4578-a6e3-6f0dd72dd149)\"" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.785425 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"879b079c-46d9-492a-bf09-b4b1f07f0977\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46b77cef12b8c3593dfa85d7822513c52fa384ef7cfe71e30f24300271eb730a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4594d0d5f1a9342ac7af89120cafdc12a4313bb9590198916f5da4cc2f6591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6de71f467cb3ca8c53b98156e5dcc3fcf875f6c3e51dda3cd972201f1dff27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ce9e45fb31732d884c6570779abd8e272b02d032aeaec08779843c2667c4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0fb3d296168dd7b2584edeb7c9bcb692b389837c1d6e7848a30ae36b1fca86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710e0aebead8837db1519d0ebfb741e833f3ed7420c097f1c22d95c0d0b64083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://710e0aebead8837db1519d0ebfb741e833f3ed7420c097f1c22d95c0d0b64083\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5190b7092187e1c61ce42655a1732a4dca6ddf7fe391ebc731995ea488129cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5190b7092187e1c61ce42655a1732a4dca6ddf7fe391ebc731995ea488129cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ecf526b726249235f13ea526506e3540f3468b17d59926e917dd40cfeb3fe5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf526b726249235f13ea526506e3540f3468b17d59926e917dd40cfeb3fe5f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.802896 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.819079 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.836861 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.852933 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.862793 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.862865 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.862884 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.862913 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.862935 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:48Z","lastTransitionTime":"2026-02-18T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.880331 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a41f4a2d2ef01e1daeba350344bbba35b8a23639e453faa6aa52cdaf212013d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:19:17Z\\\",\\\"message\\\":\\\"s:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, 
Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string(nil)}, services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), Groups:[]string(nil)}}\\\\nI0218 19:19:17.102077 6422 lb_config.go:1031] Cluster endpoints for openshift-dns/dns-default for network=default are: map[]\\\\nI0218 19:19:17.102114 6422 services_controller.go:443] Built service openshift-dns/dns-default LB cluster-wide configs for network=default: []servi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:19:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:19:48Z\\\",\\\"message\\\":\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:19:48.093717 6801 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0218 19:19:48.093767 6801 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.895086 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.915454 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.932046 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.946100 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.965268 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558
343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.965985 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.966029 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.966042 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.966061 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.966073 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:48Z","lastTransitionTime":"2026-02-18T19:19:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.977473 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f082e73e-90b3-4709-8f92-30e0e8bd69fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa0d5ed3320e375aa7ce21f39b9ad34357cc203bdf072e2d3464424ad135058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9437ec7801e5224e69e4648a5c6ae8228ce67a66fa49926879f0479a14b6e99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55dcb9c40ddbefcf612d63ca8f95a6101bcb7372164e6f35c742617062763f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358
25771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:48 crc kubenswrapper[4754]: I0218 19:19:48.990724 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:19:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.003707 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:49 crc 
kubenswrapper[4754]: I0218 19:19:49.018962 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.033612 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1527f77f3016297e8b5250f9098c4049afcc33b06d7b6a5378f753a3870608a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"
exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:19:34Z\\\",\\\"message\\\":\\\"2026-02-18T19:18:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_56ebceb3-c62e-4b03-8305-8cd84a918da7\\\\n2026-02-18T19:18:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_56ebceb3-c62e-4b03-8305-8cd84a918da7 to /host/opt/cni/bin/\\\\n2026-02-18T19:18:49Z [verbose] multus-daemon started\\\\n2026-02-18T19:18:49Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:19:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\
\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.046058 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d
586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.057977 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fc
f046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.068324 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.068363 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.068371 4754 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.068387 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.068397 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:49Z","lastTransitionTime":"2026-02-18T19:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.071953 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affe3d5-fdb0-4797-8bce-1b481530cb04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b850d1c3185dba59c230f6286f3a76135edff3786413fd586f1594847ddd600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ef9f81f8ebc17fd6b21cca8878ddb21e1cd9e8583cabbcb46042aff79b22246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ef9f81f8ebc17fd6b21cca8878ddb21e1cd9e8583cabbcb46042aff79b22246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.171354 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.171412 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.171424 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.171439 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.171450 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:49Z","lastTransitionTime":"2026-02-18T19:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.209078 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.209114 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:49 crc kubenswrapper[4754]: E0218 19:19:49.209253 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:49 crc kubenswrapper[4754]: E0218 19:19:49.209347 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.274502 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.274652 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.274681 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.274707 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.274724 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:49Z","lastTransitionTime":"2026-02-18T19:19:49Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.378521 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.378588 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.378601 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.378629 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.378643 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:49Z","lastTransitionTime":"2026-02-18T19:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.395076 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 20:28:32.642146097 +0000 UTC Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.483099 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.483340 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.483364 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.483441 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.483473 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:49Z","lastTransitionTime":"2026-02-18T19:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.587533 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.587623 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.587654 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.587689 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.587714 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:49Z","lastTransitionTime":"2026-02-18T19:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.691193 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.691269 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.691294 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.691332 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.691359 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:49Z","lastTransitionTime":"2026-02-18T19:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.767023 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glx55_82e5683f-ada7-4578-a6e3-6f0dd72dd149/ovnkube-controller/3.log" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.775566 4754 scope.go:117] "RemoveContainer" containerID="e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810" Feb 18 19:19:49 crc kubenswrapper[4754]: E0218 19:19:49.775871 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-glx55_openshift-ovn-kubernetes(82e5683f-ada7-4578-a6e3-6f0dd72dd149)\"" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.792641 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gpz55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35524782-f487-48c5-ae76-a9065bb810c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6abb441e86110081c070db9f528e3a1b13f8227241c2d42a474edb7bafe248de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4jtck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gpz55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.794772 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.794820 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.794829 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.794846 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.794860 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:49Z","lastTransitionTime":"2026-02-18T19:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.810217 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"830ec484-c66a-4273-919a-af677d24c80c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://929b356ff22b18cd399a74996f06a0e380fce9cc55e2a8e2dfd38a150b288e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe92ac6d231
ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://947e10c5546cf19e81d764aab108062a5aab40e80d9234c82be1c2b6ac4fc182\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcecd17b53f031abf9f2d6f31ab84f65ec50dd402fa19633e5ea08590d97e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.842408 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"879b079c-46d9-492a-bf09-b4b1f07f0977\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46b77cef12b8c3593dfa85d7822513c52fa384ef7cfe71e30f24300271eb730a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b4594d0d5f1a9342ac7af89120cafdc12a4313bb9590198916f5da4cc2f6591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e6de71f467cb3ca8c53b98156e5dcc3fcf875f6c3e51dda3cd972201f1dff27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ce9e45fb31732d884c6570779abd8e272b02d032aeaec08779843c2667c4dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0fb3d296168dd7b2584edeb7c9bcb692b389837c1d6e7848a30ae36b1fca86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710e0aebead8837db1519d0ebfb741e833f3ed7420c097f1c22d95c0d0b64083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://710e0aebead8837db1519d0ebfb741e833f3ed7420c097f1c22d95c0d0b64083\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5190b7092187e1c61ce42655a1732a4dca6ddf7fe391ebc731995ea488129cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5190b7092187e1c61ce42655a1732a4dca6ddf7fe391ebc731995ea488129cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ecf526b726249235f13ea526506e3540f3468b17d59926e917dd40cfeb3fe5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecf526b726249235f13ea526506e3540f3468b17d59926e917dd40cfeb3fe5f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.865577 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb813d6-cecc-41a2-8649-7f47f6020d18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:18:37.777540 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:18:37.777787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:18:37.778623 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1090269622/tls.crt::/tmp/serving-cert-1090269622/tls.key\\\\\\\"\\\\nI0218 19:18:38.125020 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:18:38.133268 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:18:38.133446 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:18:38.133498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:18:38.133523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:18:38.142119 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:18:38.142161 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142166 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:18:38.142171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:18:38.142175 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:18:38.142178 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:18:38.142182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:18:38.142185 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:18:38.146868 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.883874 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.898388 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.898486 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.898514 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.898553 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.898588 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:49Z","lastTransitionTime":"2026-02-18T19:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.902791 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.923852 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.953100 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82e5683f-ada7-4578-a6e3-6f0dd72dd149\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:19:48Z\\\",\\\"message\\\":\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:19:48.093717 6801 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0218 19:19:48.093767 6801 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:19:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-glx55_openshift-ovn-kubernetes(82e5683f-ada7-4578-a6e3-6f0dd72dd149)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d809f66b03a2511a6
87cd39a8df81e123fd214718058d27ca790886d7092b8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rpsvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-glx55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.969831 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f082e73e-90b3-4709-8f92-30e0e8bd69fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa0d5ed3320e375aa7ce21f39b9ad34357cc203bdf072e2d3464424ad135058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9437ec7801e5224e69e4648a5c6ae8228ce67a66fa49926879f0479a14b6e99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55dcb9c40ddbefcf612d63ca8f95a6101bcb7372164e6f35c742617062763f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://bd618f380f35f6609102939d14a2b6c1cd41652d763032fd7667c4d0c311f13b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:49 crc kubenswrapper[4754]: I0218 19:19:49.990614 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47a9d5e4e5b1e8f20272086865a3c16a30f3232e79638f1ecb19cbf0a240620d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2265a34b3e2ada3db4eb582f5a9f5ba58b42dd51bf58e63d3b000d3710e9d0a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.001276 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.001366 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.001395 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.001427 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.001454 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:50Z","lastTransitionTime":"2026-02-18T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.007972 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-z5qkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f810067-9720-4365-8d1b-8831300d10ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://741e18af77e4b813a40612e755cec35d4256403370721d7874bb33f5c73d0fa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkrdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-z5qkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.024767 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84dca4a4-85d4-442f-a34d-d12df5252a65\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb6922e1af1b3714041daeb088618a757a383b9e50543e5de167d988eb9a745a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d98ed8433c5e42efc836b7c840be9fff747b566082fcef9df14bdd43de535e51\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ba9fd2111e4bcd78b303fa33cd272963f6298ddc508fdb8684e15c8f97e914b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:45Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1d38fc65cee275f5f28c53d86f1e2be0d6452758b8164a4e00de1fce58bb371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://731ba
a8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://731baa8edb074db2953974cc70ad1bf3d221e901aa5af0b990fca209e727c45e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9afcb47c1f390cc5f3a248c8b0cb558343c081f6870b2dc4c5776412ea59583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:51Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12f813e51ca0f02ba87f6e79e84b33a348822fcdf0ad2fdc07856a6780c45be3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrjj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-tpcwn\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.044539 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c63635c0552157b2647b788a2a320c26fd21e3a19169eea7807a1d3572d5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.063569 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d43d42232f32937dc4871907f99b56da1a1c982db7b35fb05808d0f5b03f285a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.079820 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qztvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"539505bb-b2d2-4adc-be1e-a95f73778a52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj67g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qztvz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.095184 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2affe3d5-fdb0-4797-8bce-1b481530cb04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b850d1c3185dba59c230f6286f3a76135edff3786413fd586f1594847ddd600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ef9f81f8ebc17fd6b21cca8878ddb21e1cd9e8583cabbcb46042aff79b22246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ef9f81f8ebc17fd6b21cca8878ddb21e1cd9e8583cabbcb46042aff79b22246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:18:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.104649 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.104693 4754 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.104707 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.104728 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.104743 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:50Z","lastTransitionTime":"2026-02-18T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.110772 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-pp2q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55244610-cf2e-4b72-b8b7-9d55898fbb62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:19:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1527f77f3016297e8b5250f9098c4049afcc33b06d7b6a5378f753a3870608a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:19:34Z\\\",\\\"message\\\":\\\"2026-02-18T19:18:47+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_56ebceb3-c62e-4b03-8305-8cd84a918da7\\\\n2026-02-18T19:18:47+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_56ebceb3-c62e-4b03-8305-8cd84a918da7 to /host/opt/cni/bin/\\\\n2026-02-18T19:18:49Z [verbose] multus-daemon started\\\\n2026-02-18T19:18:49Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:19:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:19:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtgvb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-pp2q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.125951 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0094be0b03cd0e6e708ac0a06eb9a0575c806452b83485971c441a802a9fa714\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d
586f21634588ba9d5c427641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sfdps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wmjxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.141297 4754 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b8e7ce0-bf49-4935-bf1f-44df60660b11\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcbea2e4aecdfa5f47a4f95ca704c323d5db51044f15ce7f45fc8aec186ca2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66ff5b14fe4ebe106c38a9f2ef8629a9b91fc
f046e408be869e344c02fee428e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9m8rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:18:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lzrmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:19:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.212104 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.212173 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:50 crc kubenswrapper[4754]: E0218 19:19:50.212448 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:50 crc kubenswrapper[4754]: E0218 19:19:50.212586 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.215390 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.216639 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.216679 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.216715 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.216746 4754 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:50Z","lastTransitionTime":"2026-02-18T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.320687 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.320737 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.320747 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.320765 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.320776 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:50Z","lastTransitionTime":"2026-02-18T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.396299 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 16:50:52.133013155 +0000 UTC Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.423200 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.423257 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.423267 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.423284 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.423298 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:50Z","lastTransitionTime":"2026-02-18T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.525868 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.525916 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.525926 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.525943 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.525952 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:50Z","lastTransitionTime":"2026-02-18T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.629456 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.629509 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.629521 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.629540 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.629552 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:50Z","lastTransitionTime":"2026-02-18T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.732834 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.732916 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.732938 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.732968 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.732989 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:50Z","lastTransitionTime":"2026-02-18T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.835983 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.836061 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.836083 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.836114 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.836135 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:50Z","lastTransitionTime":"2026-02-18T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.939565 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.939598 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.939607 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.939623 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:50 crc kubenswrapper[4754]: I0218 19:19:50.939632 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:50Z","lastTransitionTime":"2026-02-18T19:19:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.042205 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.042238 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.042255 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.042277 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.042287 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:51Z","lastTransitionTime":"2026-02-18T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.144612 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.144686 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.144698 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.144722 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.144736 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:51Z","lastTransitionTime":"2026-02-18T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.208927 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:51 crc kubenswrapper[4754]: E0218 19:19:51.209058 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.209265 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:51 crc kubenswrapper[4754]: E0218 19:19:51.209320 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.247393 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.247440 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.247449 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.247464 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.247474 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:51Z","lastTransitionTime":"2026-02-18T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.350186 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.350221 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.350229 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.350242 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.350251 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:51Z","lastTransitionTime":"2026-02-18T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.396803 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 05:19:54.695348431 +0000 UTC Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.453430 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.453484 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.453537 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.453557 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.453570 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:51Z","lastTransitionTime":"2026-02-18T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.556286 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.556325 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.556333 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.556348 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.556358 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:51Z","lastTransitionTime":"2026-02-18T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.659850 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.659900 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.659914 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.659956 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.659983 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:51Z","lastTransitionTime":"2026-02-18T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.762619 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.762670 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.762682 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.762703 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.762717 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:51Z","lastTransitionTime":"2026-02-18T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.865007 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.865042 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.865051 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.865066 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.865075 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:51Z","lastTransitionTime":"2026-02-18T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.967692 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.967725 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.967733 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.967746 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:51 crc kubenswrapper[4754]: I0218 19:19:51.967755 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:51Z","lastTransitionTime":"2026-02-18T19:19:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.070779 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.070820 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.070831 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.070851 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.070862 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:52Z","lastTransitionTime":"2026-02-18T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.173249 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.173289 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.173306 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.173328 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.173343 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:52Z","lastTransitionTime":"2026-02-18T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.208803 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.208835 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:52 crc kubenswrapper[4754]: E0218 19:19:52.208922 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:52 crc kubenswrapper[4754]: E0218 19:19:52.209023 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.276007 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.276058 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.276074 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.276096 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.276126 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:52Z","lastTransitionTime":"2026-02-18T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.378236 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.378309 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.378332 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.378357 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.378374 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:52Z","lastTransitionTime":"2026-02-18T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.397868 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 07:30:48.649964248 +0000 UTC Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.480682 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.480718 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.480729 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.480744 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.480755 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:52Z","lastTransitionTime":"2026-02-18T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.582789 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.583209 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.583423 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.583878 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.583916 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:52Z","lastTransitionTime":"2026-02-18T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.686060 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.686176 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.686198 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.686225 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.686242 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:52Z","lastTransitionTime":"2026-02-18T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.789006 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.789067 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.789076 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.789088 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.789099 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:52Z","lastTransitionTime":"2026-02-18T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.891214 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.891274 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.891288 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.891305 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.891315 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:52Z","lastTransitionTime":"2026-02-18T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.994751 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.994791 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.994802 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.994818 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:52 crc kubenswrapper[4754]: I0218 19:19:52.994827 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:52Z","lastTransitionTime":"2026-02-18T19:19:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.097193 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.097234 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.097243 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.097260 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.097268 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:53Z","lastTransitionTime":"2026-02-18T19:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.200006 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.200048 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.200060 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.200076 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.200087 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:53Z","lastTransitionTime":"2026-02-18T19:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.209303 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.209358 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:53 crc kubenswrapper[4754]: E0218 19:19:53.209436 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:53 crc kubenswrapper[4754]: E0218 19:19:53.209491 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.302614 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.302660 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.302669 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.302683 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.302692 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:53Z","lastTransitionTime":"2026-02-18T19:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.398562 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 03:57:35.552768957 +0000 UTC Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.406367 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.406601 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.406669 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.406735 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.406797 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:53Z","lastTransitionTime":"2026-02-18T19:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.509197 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.509456 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.509530 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.509612 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.509686 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:53Z","lastTransitionTime":"2026-02-18T19:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.612896 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.612943 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.612980 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.612999 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.613011 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:53Z","lastTransitionTime":"2026-02-18T19:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.715345 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.715385 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.715396 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.715416 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.715430 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:53Z","lastTransitionTime":"2026-02-18T19:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.817866 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.817934 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.817944 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.817958 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.817985 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:53Z","lastTransitionTime":"2026-02-18T19:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.920268 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.920317 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.920329 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.920344 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:53 crc kubenswrapper[4754]: I0218 19:19:53.920354 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:53Z","lastTransitionTime":"2026-02-18T19:19:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.023722 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.023782 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.023793 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.024196 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.024371 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:54Z","lastTransitionTime":"2026-02-18T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.126950 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.127211 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.127220 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.127234 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.127243 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:54Z","lastTransitionTime":"2026-02-18T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.208674 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.208676 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:54 crc kubenswrapper[4754]: E0218 19:19:54.208835 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:54 crc kubenswrapper[4754]: E0218 19:19:54.208923 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.229428 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.229481 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.229490 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.229507 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.229517 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:54Z","lastTransitionTime":"2026-02-18T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.331511 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.331551 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.331562 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.331576 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.331587 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:54Z","lastTransitionTime":"2026-02-18T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.398766 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:16:38.078057463 +0000 UTC Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.433761 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.433803 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.433816 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.433833 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.433845 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:54Z","lastTransitionTime":"2026-02-18T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.536818 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.536877 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.536885 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.536900 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.536910 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:54Z","lastTransitionTime":"2026-02-18T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.639374 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.639413 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.639422 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.639436 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.639446 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:54Z","lastTransitionTime":"2026-02-18T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.742095 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.742138 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.742178 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.742192 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.742200 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:54Z","lastTransitionTime":"2026-02-18T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.844774 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.844826 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.844842 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.844865 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.844883 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:54Z","lastTransitionTime":"2026-02-18T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.947602 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.947636 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.947655 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.947668 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:54 crc kubenswrapper[4754]: I0218 19:19:54.947677 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:54Z","lastTransitionTime":"2026-02-18T19:19:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.050170 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.050207 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.050219 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.050236 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.050247 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:55Z","lastTransitionTime":"2026-02-18T19:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.153319 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.153403 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.153424 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.153455 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.153486 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:55Z","lastTransitionTime":"2026-02-18T19:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.209492 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.209503 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:55 crc kubenswrapper[4754]: E0218 19:19:55.209867 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:55 crc kubenswrapper[4754]: E0218 19:19:55.209997 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.256255 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.256287 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.256297 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.256310 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.256321 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:55Z","lastTransitionTime":"2026-02-18T19:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.359370 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.359412 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.359432 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.359447 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.359458 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:55Z","lastTransitionTime":"2026-02-18T19:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.398920 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:23:24.121013789 +0000 UTC Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.463039 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.463113 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.463131 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.463195 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.463214 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:55Z","lastTransitionTime":"2026-02-18T19:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.514188 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.514261 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.514281 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.514305 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.514322 4754 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:19:55Z","lastTransitionTime":"2026-02-18T19:19:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.573280 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n"] Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.573782 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.576721 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.576823 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.577244 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.577629 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.582977 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d57b6f-f45a-4da5-ba8e-e287ae1d249e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n272n\" (UID: \"76d57b6f-f45a-4da5-ba8e-e287ae1d249e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.583035 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76d57b6f-f45a-4da5-ba8e-e287ae1d249e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n272n\" (UID: \"76d57b6f-f45a-4da5-ba8e-e287ae1d249e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.583199 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/76d57b6f-f45a-4da5-ba8e-e287ae1d249e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n272n\" (UID: \"76d57b6f-f45a-4da5-ba8e-e287ae1d249e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.583243 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/76d57b6f-f45a-4da5-ba8e-e287ae1d249e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n272n\" (UID: \"76d57b6f-f45a-4da5-ba8e-e287ae1d249e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.583284 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/76d57b6f-f45a-4da5-ba8e-e287ae1d249e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n272n\" (UID: \"76d57b6f-f45a-4da5-ba8e-e287ae1d249e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.596727 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=74.59670442 podStartE2EDuration="1m14.59670442s" podCreationTimestamp="2026-02-18 19:18:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:55.596617067 +0000 UTC m=+98.047029863" watchObservedRunningTime="2026-02-18 19:19:55.59670442 +0000 UTC m=+98.047117236" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.623423 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=18.623394529 podStartE2EDuration="18.623394529s" 
podCreationTimestamp="2026-02-18 19:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:55.621192996 +0000 UTC m=+98.071605802" watchObservedRunningTime="2026-02-18 19:19:55.623394529 +0000 UTC m=+98.073807345" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.641909 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=77.641885727 podStartE2EDuration="1m17.641885727s" podCreationTimestamp="2026-02-18 19:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:55.641438324 +0000 UTC m=+98.091851150" watchObservedRunningTime="2026-02-18 19:19:55.641885727 +0000 UTC m=+98.092298533" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.684441 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/76d57b6f-f45a-4da5-ba8e-e287ae1d249e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n272n\" (UID: \"76d57b6f-f45a-4da5-ba8e-e287ae1d249e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.684531 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d57b6f-f45a-4da5-ba8e-e287ae1d249e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n272n\" (UID: \"76d57b6f-f45a-4da5-ba8e-e287ae1d249e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.684559 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/76d57b6f-f45a-4da5-ba8e-e287ae1d249e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n272n\" (UID: \"76d57b6f-f45a-4da5-ba8e-e287ae1d249e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.684591 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/76d57b6f-f45a-4da5-ba8e-e287ae1d249e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n272n\" (UID: \"76d57b6f-f45a-4da5-ba8e-e287ae1d249e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.684608 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/76d57b6f-f45a-4da5-ba8e-e287ae1d249e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n272n\" (UID: \"76d57b6f-f45a-4da5-ba8e-e287ae1d249e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.684820 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/76d57b6f-f45a-4da5-ba8e-e287ae1d249e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n272n\" (UID: \"76d57b6f-f45a-4da5-ba8e-e287ae1d249e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.685468 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/76d57b6f-f45a-4da5-ba8e-e287ae1d249e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n272n\" (UID: \"76d57b6f-f45a-4da5-ba8e-e287ae1d249e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" Feb 18 19:19:55 crc 
kubenswrapper[4754]: I0218 19:19:55.685516 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/76d57b6f-f45a-4da5-ba8e-e287ae1d249e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n272n\" (UID: \"76d57b6f-f45a-4da5-ba8e-e287ae1d249e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.690864 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76d57b6f-f45a-4da5-ba8e-e287ae1d249e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n272n\" (UID: \"76d57b6f-f45a-4da5-ba8e-e287ae1d249e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.701932 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76d57b6f-f45a-4da5-ba8e-e287ae1d249e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n272n\" (UID: \"76d57b6f-f45a-4da5-ba8e-e287ae1d249e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.740933 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-gpz55" podStartSLOduration=73.740907495 podStartE2EDuration="1m13.740907495s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:55.725524783 +0000 UTC m=+98.175937599" watchObservedRunningTime="2026-02-18 19:19:55.740907495 +0000 UTC m=+98.191320291" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.741064 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=45.741059499 podStartE2EDuration="45.741059499s" podCreationTimestamp="2026-02-18 19:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:55.741027058 +0000 UTC m=+98.191439854" watchObservedRunningTime="2026-02-18 19:19:55.741059499 +0000 UTC m=+98.191472295" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.785403 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-z5qkd" podStartSLOduration=73.785385603 podStartE2EDuration="1m13.785385603s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:55.767359716 +0000 UTC m=+98.217772522" watchObservedRunningTime="2026-02-18 19:19:55.785385603 +0000 UTC m=+98.235798399" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.785510 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-tpcwn" podStartSLOduration=73.785506156 podStartE2EDuration="1m13.785506156s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:55.78529796 +0000 UTC m=+98.235710756" watchObservedRunningTime="2026-02-18 19:19:55.785506156 +0000 UTC m=+98.235918952" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.855297 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.855277523 podStartE2EDuration="21.855277523s" podCreationTimestamp="2026-02-18 19:19:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:55.842042102 +0000 UTC m=+98.292454898" watchObservedRunningTime="2026-02-18 19:19:55.855277523 +0000 UTC m=+98.305690319" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.855420 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-pp2q2" podStartSLOduration=73.855414337 podStartE2EDuration="1m13.855414337s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:55.855205851 +0000 UTC m=+98.305618667" watchObservedRunningTime="2026-02-18 19:19:55.855414337 +0000 UTC m=+98.305827133" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.871490 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podStartSLOduration=73.871468317 podStartE2EDuration="1m13.871468317s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:55.870762497 +0000 UTC m=+98.321175293" watchObservedRunningTime="2026-02-18 19:19:55.871468317 +0000 UTC m=+98.321881113" Feb 18 19:19:55 crc kubenswrapper[4754]: I0218 19:19:55.885008 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lzrmf" podStartSLOduration=73.884990576 podStartE2EDuration="1m13.884990576s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:55.88444027 +0000 UTC m=+98.334853066" watchObservedRunningTime="2026-02-18 19:19:55.884990576 +0000 UTC m=+98.335403372" Feb 18 19:19:55 crc 
kubenswrapper[4754]: I0218 19:19:55.888333 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" Feb 18 19:19:56 crc kubenswrapper[4754]: I0218 19:19:56.209370 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:56 crc kubenswrapper[4754]: I0218 19:19:56.209386 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:56 crc kubenswrapper[4754]: E0218 19:19:56.209557 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:56 crc kubenswrapper[4754]: E0218 19:19:56.209650 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:56 crc kubenswrapper[4754]: I0218 19:19:56.399718 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 19:33:38.713486175 +0000 UTC Feb 18 19:19:56 crc kubenswrapper[4754]: I0218 19:19:56.399774 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 18 19:19:56 crc kubenswrapper[4754]: I0218 19:19:56.407528 4754 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 19:19:56 crc kubenswrapper[4754]: I0218 19:19:56.795510 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" event={"ID":"76d57b6f-f45a-4da5-ba8e-e287ae1d249e","Type":"ContainerStarted","Data":"cdd562ca772ffddc56a1f019920d14d94338df90abd92c2696bb305504908881"} Feb 18 19:19:56 crc kubenswrapper[4754]: I0218 19:19:56.795561 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" event={"ID":"76d57b6f-f45a-4da5-ba8e-e287ae1d249e","Type":"ContainerStarted","Data":"b03f6ed44e0ac769c8663b92db4544d5a88ee3be4126f32bdef5ebf1ea052128"} Feb 18 19:19:57 crc kubenswrapper[4754]: I0218 19:19:57.208939 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:57 crc kubenswrapper[4754]: I0218 19:19:57.208959 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:57 crc kubenswrapper[4754]: E0218 19:19:57.209072 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:57 crc kubenswrapper[4754]: E0218 19:19:57.209179 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:19:58 crc kubenswrapper[4754]: I0218 19:19:58.208646 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:58 crc kubenswrapper[4754]: I0218 19:19:58.208654 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:58 crc kubenswrapper[4754]: E0218 19:19:58.209560 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:58 crc kubenswrapper[4754]: E0218 19:19:58.209698 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:59 crc kubenswrapper[4754]: I0218 19:19:59.209005 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:59 crc kubenswrapper[4754]: E0218 19:19:59.209156 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:59 crc kubenswrapper[4754]: I0218 19:19:59.209716 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:19:59 crc kubenswrapper[4754]: E0218 19:19:59.209930 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:20:00 crc kubenswrapper[4754]: I0218 19:20:00.209106 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:20:00 crc kubenswrapper[4754]: I0218 19:20:00.209120 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:20:00 crc kubenswrapper[4754]: E0218 19:20:00.209244 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:20:00 crc kubenswrapper[4754]: E0218 19:20:00.209426 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:20:01 crc kubenswrapper[4754]: I0218 19:20:01.037359 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs\") pod \"network-metrics-daemon-qztvz\" (UID: \"539505bb-b2d2-4adc-be1e-a95f73778a52\") " pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:20:01 crc kubenswrapper[4754]: E0218 19:20:01.037518 4754 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:20:01 crc kubenswrapper[4754]: E0218 19:20:01.037608 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs podName:539505bb-b2d2-4adc-be1e-a95f73778a52 nodeName:}" failed. No retries permitted until 2026-02-18 19:21:05.037590459 +0000 UTC m=+167.488003255 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs") pod "network-metrics-daemon-qztvz" (UID: "539505bb-b2d2-4adc-be1e-a95f73778a52") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:20:01 crc kubenswrapper[4754]: I0218 19:20:01.208904 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:20:01 crc kubenswrapper[4754]: E0218 19:20:01.209051 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:20:01 crc kubenswrapper[4754]: I0218 19:20:01.209258 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:20:01 crc kubenswrapper[4754]: E0218 19:20:01.209326 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:20:02 crc kubenswrapper[4754]: I0218 19:20:02.209289 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:20:02 crc kubenswrapper[4754]: I0218 19:20:02.209597 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:20:02 crc kubenswrapper[4754]: E0218 19:20:02.209667 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:20:02 crc kubenswrapper[4754]: E0218 19:20:02.209734 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:20:03 crc kubenswrapper[4754]: I0218 19:20:03.209473 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:20:03 crc kubenswrapper[4754]: I0218 19:20:03.209473 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:20:03 crc kubenswrapper[4754]: E0218 19:20:03.209664 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:20:03 crc kubenswrapper[4754]: E0218 19:20:03.209792 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:20:04 crc kubenswrapper[4754]: I0218 19:20:04.209727 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:20:04 crc kubenswrapper[4754]: I0218 19:20:04.209835 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:20:04 crc kubenswrapper[4754]: E0218 19:20:04.210227 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:20:04 crc kubenswrapper[4754]: E0218 19:20:04.210323 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:20:04 crc kubenswrapper[4754]: I0218 19:20:04.210521 4754 scope.go:117] "RemoveContainer" containerID="e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810" Feb 18 19:20:04 crc kubenswrapper[4754]: E0218 19:20:04.210687 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-glx55_openshift-ovn-kubernetes(82e5683f-ada7-4578-a6e3-6f0dd72dd149)\"" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" Feb 18 19:20:05 crc kubenswrapper[4754]: I0218 19:20:05.209619 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:20:05 crc kubenswrapper[4754]: E0218 19:20:05.209841 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:20:05 crc kubenswrapper[4754]: I0218 19:20:05.210293 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:20:05 crc kubenswrapper[4754]: E0218 19:20:05.210461 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:20:06 crc kubenswrapper[4754]: I0218 19:20:06.209053 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:20:06 crc kubenswrapper[4754]: I0218 19:20:06.209157 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:20:06 crc kubenswrapper[4754]: E0218 19:20:06.209397 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:20:06 crc kubenswrapper[4754]: E0218 19:20:06.209596 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:20:07 crc kubenswrapper[4754]: I0218 19:20:07.209009 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:20:07 crc kubenswrapper[4754]: I0218 19:20:07.209006 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:20:07 crc kubenswrapper[4754]: E0218 19:20:07.209215 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:20:07 crc kubenswrapper[4754]: E0218 19:20:07.209361 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:20:08 crc kubenswrapper[4754]: I0218 19:20:08.209541 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:20:08 crc kubenswrapper[4754]: I0218 19:20:08.209570 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:20:08 crc kubenswrapper[4754]: E0218 19:20:08.210629 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:20:08 crc kubenswrapper[4754]: E0218 19:20:08.210685 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:20:09 crc kubenswrapper[4754]: I0218 19:20:09.208888 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:20:09 crc kubenswrapper[4754]: I0218 19:20:09.208905 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:20:09 crc kubenswrapper[4754]: E0218 19:20:09.209445 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:20:09 crc kubenswrapper[4754]: E0218 19:20:09.209541 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:20:10 crc kubenswrapper[4754]: I0218 19:20:10.209436 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:20:10 crc kubenswrapper[4754]: I0218 19:20:10.209445 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:20:10 crc kubenswrapper[4754]: E0218 19:20:10.209574 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:20:10 crc kubenswrapper[4754]: E0218 19:20:10.209697 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:20:11 crc kubenswrapper[4754]: I0218 19:20:11.209174 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:20:11 crc kubenswrapper[4754]: E0218 19:20:11.209260 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:20:11 crc kubenswrapper[4754]: I0218 19:20:11.209304 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:20:11 crc kubenswrapper[4754]: E0218 19:20:11.209471 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:20:12 crc kubenswrapper[4754]: I0218 19:20:12.209398 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:20:12 crc kubenswrapper[4754]: E0218 19:20:12.209662 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:20:12 crc kubenswrapper[4754]: I0218 19:20:12.210133 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:20:12 crc kubenswrapper[4754]: E0218 19:20:12.210309 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:20:13 crc kubenswrapper[4754]: I0218 19:20:13.208809 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:20:13 crc kubenswrapper[4754]: I0218 19:20:13.208919 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:20:13 crc kubenswrapper[4754]: E0218 19:20:13.209031 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:20:13 crc kubenswrapper[4754]: E0218 19:20:13.209112 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:20:14 crc kubenswrapper[4754]: I0218 19:20:14.209279 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:20:14 crc kubenswrapper[4754]: I0218 19:20:14.209387 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:20:14 crc kubenswrapper[4754]: E0218 19:20:14.209456 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:20:14 crc kubenswrapper[4754]: E0218 19:20:14.209594 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:20:15 crc kubenswrapper[4754]: I0218 19:20:15.208805 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:20:15 crc kubenswrapper[4754]: I0218 19:20:15.208805 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:20:15 crc kubenswrapper[4754]: E0218 19:20:15.209128 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:20:15 crc kubenswrapper[4754]: E0218 19:20:15.209267 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:20:16 crc kubenswrapper[4754]: I0218 19:20:16.209360 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:20:16 crc kubenswrapper[4754]: I0218 19:20:16.209465 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:20:16 crc kubenswrapper[4754]: E0218 19:20:16.209514 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:20:16 crc kubenswrapper[4754]: E0218 19:20:16.209663 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:20:17 crc kubenswrapper[4754]: I0218 19:20:17.209174 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:20:17 crc kubenswrapper[4754]: I0218 19:20:17.209263 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:20:17 crc kubenswrapper[4754]: E0218 19:20:17.209415 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:20:17 crc kubenswrapper[4754]: E0218 19:20:17.209762 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:20:18 crc kubenswrapper[4754]: E0218 19:20:18.156068 4754 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 18 19:20:18 crc kubenswrapper[4754]: I0218 19:20:18.208751 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:20:18 crc kubenswrapper[4754]: I0218 19:20:18.208754 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:20:18 crc kubenswrapper[4754]: E0218 19:20:18.210735 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:20:18 crc kubenswrapper[4754]: E0218 19:20:18.210966 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:20:18 crc kubenswrapper[4754]: E0218 19:20:18.324087 4754 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 18 19:20:19 crc kubenswrapper[4754]: I0218 19:20:19.209590 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:20:19 crc kubenswrapper[4754]: I0218 19:20:19.209651 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:20:19 crc kubenswrapper[4754]: E0218 19:20:19.209946 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:20:19 crc kubenswrapper[4754]: I0218 19:20:19.210126 4754 scope.go:117] "RemoveContainer" containerID="e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810" Feb 18 19:20:19 crc kubenswrapper[4754]: E0218 19:20:19.210131 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:20:19 crc kubenswrapper[4754]: E0218 19:20:19.210309 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-glx55_openshift-ovn-kubernetes(82e5683f-ada7-4578-a6e3-6f0dd72dd149)\"" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" Feb 18 19:20:20 crc kubenswrapper[4754]: I0218 19:20:20.209539 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:20:20 crc kubenswrapper[4754]: I0218 19:20:20.209591 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:20:20 crc kubenswrapper[4754]: E0218 19:20:20.209704 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:20:20 crc kubenswrapper[4754]: E0218 19:20:20.209869 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:20:20 crc kubenswrapper[4754]: I0218 19:20:20.875221 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pp2q2_55244610-cf2e-4b72-b8b7-9d55898fbb62/kube-multus/1.log" Feb 18 19:20:20 crc kubenswrapper[4754]: I0218 19:20:20.875856 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pp2q2_55244610-cf2e-4b72-b8b7-9d55898fbb62/kube-multus/0.log" Feb 18 19:20:20 crc kubenswrapper[4754]: I0218 19:20:20.875917 4754 generic.go:334] "Generic (PLEG): container finished" podID="55244610-cf2e-4b72-b8b7-9d55898fbb62" containerID="1527f77f3016297e8b5250f9098c4049afcc33b06d7b6a5378f753a3870608a6" exitCode=1 Feb 18 19:20:20 crc kubenswrapper[4754]: I0218 19:20:20.875955 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pp2q2" event={"ID":"55244610-cf2e-4b72-b8b7-9d55898fbb62","Type":"ContainerDied","Data":"1527f77f3016297e8b5250f9098c4049afcc33b06d7b6a5378f753a3870608a6"} Feb 18 19:20:20 crc kubenswrapper[4754]: I0218 19:20:20.875994 4754 scope.go:117] "RemoveContainer" containerID="a12a7f8630b01fec18a41e18e6b92be61c540468802c56debe9bdac5b302fed1" Feb 18 19:20:20 crc kubenswrapper[4754]: I0218 19:20:20.876959 4754 scope.go:117] "RemoveContainer" containerID="1527f77f3016297e8b5250f9098c4049afcc33b06d7b6a5378f753a3870608a6" Feb 18 19:20:20 crc kubenswrapper[4754]: E0218 19:20:20.878527 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-pp2q2_openshift-multus(55244610-cf2e-4b72-b8b7-9d55898fbb62)\"" pod="openshift-multus/multus-pp2q2" podUID="55244610-cf2e-4b72-b8b7-9d55898fbb62" Feb 18 19:20:20 crc kubenswrapper[4754]: I0218 19:20:20.895756 4754 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n272n" podStartSLOduration=98.89572343 podStartE2EDuration="1m38.89572343s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:56.81236533 +0000 UTC m=+99.262778126" watchObservedRunningTime="2026-02-18 19:20:20.89572343 +0000 UTC m=+123.346136236" Feb 18 19:20:21 crc kubenswrapper[4754]: I0218 19:20:21.209638 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:20:21 crc kubenswrapper[4754]: I0218 19:20:21.209638 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:20:21 crc kubenswrapper[4754]: E0218 19:20:21.209822 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:20:21 crc kubenswrapper[4754]: E0218 19:20:21.209875 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:20:21 crc kubenswrapper[4754]: I0218 19:20:21.883204 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pp2q2_55244610-cf2e-4b72-b8b7-9d55898fbb62/kube-multus/1.log" Feb 18 19:20:22 crc kubenswrapper[4754]: I0218 19:20:22.208947 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:20:22 crc kubenswrapper[4754]: E0218 19:20:22.209081 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:20:22 crc kubenswrapper[4754]: I0218 19:20:22.209120 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:20:22 crc kubenswrapper[4754]: E0218 19:20:22.209329 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:20:23 crc kubenswrapper[4754]: I0218 19:20:23.209007 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:20:23 crc kubenswrapper[4754]: E0218 19:20:23.209242 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:20:23 crc kubenswrapper[4754]: I0218 19:20:23.209307 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:20:23 crc kubenswrapper[4754]: E0218 19:20:23.209838 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:20:23 crc kubenswrapper[4754]: E0218 19:20:23.325905 4754 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 19:20:24 crc kubenswrapper[4754]: I0218 19:20:24.209087 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:20:24 crc kubenswrapper[4754]: E0218 19:20:24.209292 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:20:24 crc kubenswrapper[4754]: I0218 19:20:24.209331 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:20:24 crc kubenswrapper[4754]: E0218 19:20:24.209683 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:20:25 crc kubenswrapper[4754]: I0218 19:20:25.208647 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:20:25 crc kubenswrapper[4754]: I0218 19:20:25.208666 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:20:25 crc kubenswrapper[4754]: E0218 19:20:25.208792 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:20:25 crc kubenswrapper[4754]: E0218 19:20:25.208943 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52" Feb 18 19:20:26 crc kubenswrapper[4754]: I0218 19:20:26.209488 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:20:26 crc kubenswrapper[4754]: I0218 19:20:26.209594 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:20:26 crc kubenswrapper[4754]: E0218 19:20:26.209655 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:20:26 crc kubenswrapper[4754]: E0218 19:20:26.209743 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:20:27 crc kubenswrapper[4754]: I0218 19:20:27.208818 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz"
Feb 18 19:20:27 crc kubenswrapper[4754]: I0218 19:20:27.208872 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:20:27 crc kubenswrapper[4754]: E0218 19:20:27.208955 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52"
Feb 18 19:20:27 crc kubenswrapper[4754]: E0218 19:20:27.209014 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:20:28 crc kubenswrapper[4754]: I0218 19:20:28.209075 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:20:28 crc kubenswrapper[4754]: E0218 19:20:28.211649 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:20:28 crc kubenswrapper[4754]: I0218 19:20:28.211684 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:20:28 crc kubenswrapper[4754]: E0218 19:20:28.211868 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:20:28 crc kubenswrapper[4754]: E0218 19:20:28.327585 4754 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 18 19:20:29 crc kubenswrapper[4754]: I0218 19:20:29.209015 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz"
Feb 18 19:20:29 crc kubenswrapper[4754]: I0218 19:20:29.209112 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:20:29 crc kubenswrapper[4754]: E0218 19:20:29.209344 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52"
Feb 18 19:20:29 crc kubenswrapper[4754]: E0218 19:20:29.209531 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:20:30 crc kubenswrapper[4754]: I0218 19:20:30.208936 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:20:30 crc kubenswrapper[4754]: I0218 19:20:30.208996 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:20:30 crc kubenswrapper[4754]: E0218 19:20:30.209202 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:20:30 crc kubenswrapper[4754]: E0218 19:20:30.209566 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:20:31 crc kubenswrapper[4754]: I0218 19:20:31.208909 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz"
Feb 18 19:20:31 crc kubenswrapper[4754]: I0218 19:20:31.209008 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:20:31 crc kubenswrapper[4754]: E0218 19:20:31.209764 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52"
Feb 18 19:20:31 crc kubenswrapper[4754]: E0218 19:20:31.209780 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:20:32 crc kubenswrapper[4754]: I0218 19:20:32.209806 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:20:32 crc kubenswrapper[4754]: I0218 19:20:32.210003 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:20:32 crc kubenswrapper[4754]: E0218 19:20:32.211334 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:20:32 crc kubenswrapper[4754]: E0218 19:20:32.211337 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:20:33 crc kubenswrapper[4754]: I0218 19:20:33.208701 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:20:33 crc kubenswrapper[4754]: E0218 19:20:33.208966 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:20:33 crc kubenswrapper[4754]: I0218 19:20:33.209183 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz"
Feb 18 19:20:33 crc kubenswrapper[4754]: E0218 19:20:33.209711 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52"
Feb 18 19:20:33 crc kubenswrapper[4754]: I0218 19:20:33.210355 4754 scope.go:117] "RemoveContainer" containerID="e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810"
Feb 18 19:20:33 crc kubenswrapper[4754]: E0218 19:20:33.329256 4754 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 18 19:20:33 crc kubenswrapper[4754]: I0218 19:20:33.927100 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glx55_82e5683f-ada7-4578-a6e3-6f0dd72dd149/ovnkube-controller/3.log"
Feb 18 19:20:33 crc kubenswrapper[4754]: I0218 19:20:33.929584 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerStarted","Data":"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211"}
Feb 18 19:20:33 crc kubenswrapper[4754]: I0218 19:20:33.929973 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-glx55"
Feb 18 19:20:33 crc kubenswrapper[4754]: I0218 19:20:33.958939 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" podStartSLOduration=111.958908733 podStartE2EDuration="1m51.958908733s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:33.958131681 +0000 UTC m=+136.408544467" watchObservedRunningTime="2026-02-18 19:20:33.958908733 +0000 UTC m=+136.409321529"
Feb 18 19:20:34 crc kubenswrapper[4754]: I0218 19:20:34.208641 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:20:34 crc kubenswrapper[4754]: E0218 19:20:34.208855 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:20:34 crc kubenswrapper[4754]: I0218 19:20:34.209127 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:20:34 crc kubenswrapper[4754]: E0218 19:20:34.209281 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:20:34 crc kubenswrapper[4754]: I0218 19:20:34.314604 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qztvz"]
Feb 18 19:20:34 crc kubenswrapper[4754]: I0218 19:20:34.314713 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz"
Feb 18 19:20:34 crc kubenswrapper[4754]: E0218 19:20:34.314806 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52"
Feb 18 19:20:35 crc kubenswrapper[4754]: I0218 19:20:35.208978 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:20:35 crc kubenswrapper[4754]: I0218 19:20:35.209382 4754 scope.go:117] "RemoveContainer" containerID="1527f77f3016297e8b5250f9098c4049afcc33b06d7b6a5378f753a3870608a6"
Feb 18 19:20:35 crc kubenswrapper[4754]: E0218 19:20:35.209454 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:20:35 crc kubenswrapper[4754]: I0218 19:20:35.937129 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pp2q2_55244610-cf2e-4b72-b8b7-9d55898fbb62/kube-multus/1.log"
Feb 18 19:20:35 crc kubenswrapper[4754]: I0218 19:20:35.937291 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pp2q2" event={"ID":"55244610-cf2e-4b72-b8b7-9d55898fbb62","Type":"ContainerStarted","Data":"fa5805441467198b1c86089bf816b3cb2a9e7b35ed917649659cc4f52c6e1b00"}
Feb 18 19:20:36 crc kubenswrapper[4754]: I0218 19:20:36.209284 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:20:36 crc kubenswrapper[4754]: I0218 19:20:36.209303 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz"
Feb 18 19:20:36 crc kubenswrapper[4754]: I0218 19:20:36.209387 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:20:36 crc kubenswrapper[4754]: E0218 19:20:36.209564 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:20:36 crc kubenswrapper[4754]: E0218 19:20:36.209785 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52"
Feb 18 19:20:36 crc kubenswrapper[4754]: E0218 19:20:36.209960 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:20:37 crc kubenswrapper[4754]: I0218 19:20:37.209387 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:20:37 crc kubenswrapper[4754]: E0218 19:20:37.209658 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:20:38 crc kubenswrapper[4754]: I0218 19:20:38.209185 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz"
Feb 18 19:20:38 crc kubenswrapper[4754]: I0218 19:20:38.209267 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:20:38 crc kubenswrapper[4754]: I0218 19:20:38.209185 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:20:38 crc kubenswrapper[4754]: E0218 19:20:38.210031 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qztvz" podUID="539505bb-b2d2-4adc-be1e-a95f73778a52"
Feb 18 19:20:38 crc kubenswrapper[4754]: E0218 19:20:38.210338 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:20:38 crc kubenswrapper[4754]: E0218 19:20:38.210759 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:20:39 crc kubenswrapper[4754]: I0218 19:20:39.209398 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:20:39 crc kubenswrapper[4754]: I0218 19:20:39.212499 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 18 19:20:39 crc kubenswrapper[4754]: I0218 19:20:39.213342 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 18 19:20:40 crc kubenswrapper[4754]: I0218 19:20:40.209284 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:20:40 crc kubenswrapper[4754]: I0218 19:20:40.209342 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:20:40 crc kubenswrapper[4754]: I0218 19:20:40.209350 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz"
Feb 18 19:20:40 crc kubenswrapper[4754]: I0218 19:20:40.212548 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 18 19:20:40 crc kubenswrapper[4754]: I0218 19:20:40.212785 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 18 19:20:40 crc kubenswrapper[4754]: I0218 19:20:40.213693 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 18 19:20:40 crc kubenswrapper[4754]: I0218 19:20:40.214255 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 18 19:20:45 crc kubenswrapper[4754]: I0218 19:20:45.061535 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:20:45 crc kubenswrapper[4754]: E0218 19:20:45.061714 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:22:47.061671085 +0000 UTC m=+269.512083921 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:20:45 crc kubenswrapper[4754]: I0218 19:20:45.164206 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:20:45 crc kubenswrapper[4754]: I0218 19:20:45.164308 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:20:45 crc kubenswrapper[4754]: I0218 19:20:45.164362 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:20:45 crc kubenswrapper[4754]: I0218 19:20:45.164425 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:20:45 crc kubenswrapper[4754]: I0218 19:20:45.165766 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:20:45 crc kubenswrapper[4754]: I0218 19:20:45.175386 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:20:45 crc kubenswrapper[4754]: I0218 19:20:45.178039 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:20:45 crc kubenswrapper[4754]: I0218 19:20:45.178315 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:20:45 crc kubenswrapper[4754]: I0218 19:20:45.231703 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:20:45 crc kubenswrapper[4754]: I0218 19:20:45.338684 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:20:45 crc kubenswrapper[4754]: I0218 19:20:45.357946 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:20:45 crc kubenswrapper[4754]: I0218 19:20:45.978789 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a9525a8abff4a5ad84951005b7028b19178222d97e1b9e4c9bd7e53a26888a2e"}
Feb 18 19:20:45 crc kubenswrapper[4754]: I0218 19:20:45.979382 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e519c92596333f6046b558316869de945c5c6d98bf4a513a477f54f6e44772be"}
Feb 18 19:20:45 crc kubenswrapper[4754]: I0218 19:20:45.983754 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8c52f6cd2b22f3950243d6460cd796345979f09b8b0ee73bc6b7af0117d230fe"}
Feb 18 19:20:45 crc kubenswrapper[4754]: I0218 19:20:45.983917 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bb0b1acdd96d65ae6d7a53c172b97932c7bbc5881c69233fa547c0be3032e5c8"}
Feb 18 19:20:45 crc kubenswrapper[4754]: I0218 19:20:45.987598 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9f0eb852194817c84eef7a7657ca7c4cd3df177bac474126a10727e915afce75"}
Feb 18 19:20:45 crc kubenswrapper[4754]: I0218 19:20:45.987659 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"20956926f9418106f6ca5eec9a7c0e35e29ccc4328efe9d2ddd28a07d5dae594"}
Feb 18 19:20:45 crc kubenswrapper[4754]: I0218 19:20:45.988326 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.593586 4754 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.667471 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9txkp"]
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.668068 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.674428 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lbzqr"]
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.696030 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw"]
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.697035 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94"]
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.697371 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lbzqr"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.697667 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-k7hhc"]
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.698454 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.698459 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.698928 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xchrj"]
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.699399 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-k7hhc"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.700359 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.709769 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dq767"]
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.710693 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dq767"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.711451 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.712060 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.712553 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.712865 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.713439 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.733449 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.735481 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-27w4f"]
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.736313 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-27w4f"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.765663 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.774905 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.775634 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.776009 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.776093 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.776353 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.776393 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.776424 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.776507 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.776597 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.776624 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.776670 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.776788 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.776987 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.777011 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.777124 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.777168 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.776794 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.777245 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.777363 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.777476 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.777477 4754 reflector.go:368] Caches
populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.777913 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-jk4zv"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.778388 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.782779 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.782983 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.783117 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.783279 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.783415 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.785516 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.785613 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.791531 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.791730 4754 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.791907 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.792055 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.792215 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.793864 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-smgx9"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.794500 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smgx9" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.796604 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9afb1c84-e82c-43ec-9104-03fba7d404ef-audit-policies\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.796656 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-258ps\" (UniqueName: \"kubernetes.io/projected/9b594229-eae0-42db-b340-0f23b9158cda-kube-api-access-258ps\") pod \"openshift-apiserver-operator-796bbdcf4f-dq767\" (UID: \"9b594229-eae0-42db-b340-0f23b9158cda\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dq767" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.796689 
4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-serving-cert\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.796714 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-audit\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.796735 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67dcbf0b-40be-4fae-967b-d049b796d2f5-config\") pod \"controller-manager-879f6c89f-9txkp\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.796873 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9afb1c84-e82c-43ec-9104-03fba7d404ef-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.796916 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c481ff0-d16c-4791-a274-d17c7269f430-service-ca-bundle\") pod \"authentication-operator-69f744f599-xchrj\" (UID: \"0c481ff0-d16c-4791-a274-d17c7269f430\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.796957 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-client-ca\") pod \"route-controller-manager-6576b87f9c-h9n94\" (UID: \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.796984 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-audit-dir\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797014 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/03fe0d37-55e7-485b-9ac2-b0289b860a8a-metrics-tls\") pod \"dns-operator-744455d44c-27w4f\" (UID: \"03fe0d37-55e7-485b-9ac2-b0289b860a8a\") " pod="openshift-dns-operator/dns-operator-744455d44c-27w4f" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797036 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c481ff0-d16c-4791-a274-d17c7269f430-serving-cert\") pod \"authentication-operator-69f744f599-xchrj\" (UID: \"0c481ff0-d16c-4791-a274-d17c7269f430\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797064 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9b594229-eae0-42db-b340-0f23b9158cda-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dq767\" (UID: \"9b594229-eae0-42db-b340-0f23b9158cda\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dq767" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797088 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-config\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797128 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9afb1c84-e82c-43ec-9104-03fba7d404ef-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797172 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrlt4\" (UniqueName: \"kubernetes.io/projected/0c481ff0-d16c-4791-a274-d17c7269f430-kube-api-access-nrlt4\") pod \"authentication-operator-69f744f599-xchrj\" (UID: \"0c481ff0-d16c-4791-a274-d17c7269f430\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797203 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fll9q\" (UniqueName: \"kubernetes.io/projected/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-kube-api-access-fll9q\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc 
kubenswrapper[4754]: I0218 19:20:46.797234 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-node-pullsecrets\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797262 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-encryption-config\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797294 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llvkl\" (UniqueName: \"kubernetes.io/projected/8a63c533-3225-429d-b1d0-d3c8592a71f1-kube-api-access-llvkl\") pod \"machine-api-operator-5694c8668f-lbzqr\" (UID: \"8a63c533-3225-429d-b1d0-d3c8592a71f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lbzqr" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797319 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnzvs\" (UniqueName: \"kubernetes.io/projected/9afb1c84-e82c-43ec-9104-03fba7d404ef-kube-api-access-dnzvs\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797341 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8a63c533-3225-429d-b1d0-d3c8592a71f1-images\") pod 
\"machine-api-operator-5694c8668f-lbzqr\" (UID: \"8a63c533-3225-429d-b1d0-d3c8592a71f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lbzqr" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797365 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-etcd-client\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797397 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9afb1c84-e82c-43ec-9104-03fba7d404ef-encryption-config\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797422 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-etcd-serving-ca\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797444 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-image-import-ca\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797487 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/9afb1c84-e82c-43ec-9104-03fba7d404ef-audit-dir\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797511 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a63c533-3225-429d-b1d0-d3c8592a71f1-config\") pod \"machine-api-operator-5694c8668f-lbzqr\" (UID: \"8a63c533-3225-429d-b1d0-d3c8592a71f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lbzqr" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797545 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b594229-eae0-42db-b340-0f23b9158cda-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dq767\" (UID: \"9b594229-eae0-42db-b340-0f23b9158cda\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dq767" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797575 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67dcbf0b-40be-4fae-967b-d049b796d2f5-serving-cert\") pod \"controller-manager-879f6c89f-9txkp\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797602 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjsgw\" (UniqueName: \"kubernetes.io/projected/03fe0d37-55e7-485b-9ac2-b0289b860a8a-kube-api-access-vjsgw\") pod \"dns-operator-744455d44c-27w4f\" (UID: \"03fe0d37-55e7-485b-9ac2-b0289b860a8a\") " pod="openshift-dns-operator/dns-operator-744455d44c-27w4f" Feb 18 19:20:46 crc 
kubenswrapper[4754]: I0218 19:20:46.797625 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c481ff0-d16c-4791-a274-d17c7269f430-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xchrj\" (UID: \"0c481ff0-d16c-4791-a274-d17c7269f430\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797657 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-serving-cert\") pod \"route-controller-manager-6576b87f9c-h9n94\" (UID: \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797661 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797685 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c481ff0-d16c-4791-a274-d17c7269f430-config\") pod \"authentication-operator-69f744f599-xchrj\" (UID: \"0c481ff0-d16c-4791-a274-d17c7269f430\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797719 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brppz\" (UniqueName: \"kubernetes.io/projected/67dcbf0b-40be-4fae-967b-d049b796d2f5-kube-api-access-brppz\") pod \"controller-manager-879f6c89f-9txkp\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" Feb 
18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797746 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67dcbf0b-40be-4fae-967b-d049b796d2f5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9txkp\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797805 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67dcbf0b-40be-4fae-967b-d049b796d2f5-client-ca\") pod \"controller-manager-879f6c89f-9txkp\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797828 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-config\") pod \"route-controller-manager-6576b87f9c-h9n94\" (UID: \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797859 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9afb1c84-e82c-43ec-9104-03fba7d404ef-serving-cert\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797885 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/8a63c533-3225-429d-b1d0-d3c8592a71f1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lbzqr\" (UID: \"8a63c533-3225-429d-b1d0-d3c8592a71f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lbzqr" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797911 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pr76\" (UniqueName: \"kubernetes.io/projected/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-kube-api-access-4pr76\") pod \"route-controller-manager-6576b87f9c-h9n94\" (UID: \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797930 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.798065 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.798260 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.798288 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.798332 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.798399 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.797933 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.798473 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.798498 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9afb1c84-e82c-43ec-9104-03fba7d404ef-etcd-client\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.817261 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.817597 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.817960 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.818076 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.818076 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjp"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.818895 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjp" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.828230 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.830157 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jcngv"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.830703 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jcngv" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.832251 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.832494 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.832601 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.833208 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.833376 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.833497 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.833563 4754 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.833688 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.833851 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.837300 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-srtrk"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.837800 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.838168 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.838266 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.838415 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.838454 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-srtrk" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.838532 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.838649 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.838762 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.833507 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.833690 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.842763 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.842914 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.865534 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm2j2"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.879335 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vphtb"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.879671 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm2j2" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.880007 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.880429 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.880507 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.880597 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.880591 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmzvv"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.880832 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vphtb" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.881108 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.881414 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.881798 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lt44t"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.881896 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmzvv" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.882441 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.882857 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-znncb"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.883030 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.883789 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-znncb" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.885451 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.888124 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bs2kd"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.889033 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bs2kd" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.889110 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.889218 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.889598 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.889616 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.889819 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.889844 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.890290 4754 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.891680 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.891779 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.891860 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.891938 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.892515 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.892669 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.892899 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.893039 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.893257 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.893618 4754 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.893754 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.893992 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.894721 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.899214 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-72dh6"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.899837 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.899995 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.900472 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.900494 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.900543 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.900685 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.900725 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9afb1c84-e82c-43ec-9104-03fba7d404ef-audit-dir\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.900750 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a63c533-3225-429d-b1d0-d3c8592a71f1-config\") pod \"machine-api-operator-5694c8668f-lbzqr\" (UID: \"8a63c533-3225-429d-b1d0-d3c8592a71f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lbzqr" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.900770 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.900780 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nwpf\" (UniqueName: \"kubernetes.io/projected/6876f05d-5d39-418b-8697-4dfbd5600c92-kube-api-access-2nwpf\") pod \"etcd-operator-b45778765-jk4zv\" (UID: \"6876f05d-5d39-418b-8697-4dfbd5600c92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.900804 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4a5327f-8708-421b-a361-cb948df8f801-config\") pod \"kube-apiserver-operator-766d6c64bb-wm2j2\" (UID: \"a4a5327f-8708-421b-a361-cb948df8f801\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm2j2" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.900816 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9afb1c84-e82c-43ec-9104-03fba7d404ef-audit-dir\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.900824 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b594229-eae0-42db-b340-0f23b9158cda-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dq767\" (UID: \"9b594229-eae0-42db-b340-0f23b9158cda\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dq767" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.900882 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjsgw\" (UniqueName: \"kubernetes.io/projected/03fe0d37-55e7-485b-9ac2-b0289b860a8a-kube-api-access-vjsgw\") pod \"dns-operator-744455d44c-27w4f\" (UID: \"03fe0d37-55e7-485b-9ac2-b0289b860a8a\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-27w4f" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.900905 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c481ff0-d16c-4791-a274-d17c7269f430-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xchrj\" (UID: \"0c481ff0-d16c-4791-a274-d17c7269f430\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.900944 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67dcbf0b-40be-4fae-967b-d049b796d2f5-serving-cert\") pod \"controller-manager-879f6c89f-9txkp\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.900960 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c481ff0-d16c-4791-a274-d17c7269f430-config\") pod \"authentication-operator-69f744f599-xchrj\" (UID: \"0c481ff0-d16c-4791-a274-d17c7269f430\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.900983 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/34a2348f-c1db-4d94-9943-d616526bd03b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jcngv\" (UID: \"34a2348f-c1db-4d94-9943-d616526bd03b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jcngv" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901008 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-brppz\" (UniqueName: \"kubernetes.io/projected/67dcbf0b-40be-4fae-967b-d049b796d2f5-kube-api-access-brppz\") pod \"controller-manager-879f6c89f-9txkp\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901026 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-serving-cert\") pod \"route-controller-manager-6576b87f9c-h9n94\" (UID: \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901048 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67dcbf0b-40be-4fae-967b-d049b796d2f5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9txkp\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901088 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a61192e1-b3ed-4fc0-80fc-499fe120edb4-auth-proxy-config\") pod \"machine-approver-56656f9798-smgx9\" (UID: \"a61192e1-b3ed-4fc0-80fc-499fe120edb4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smgx9" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901106 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4a5327f-8708-421b-a361-cb948df8f801-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wm2j2\" (UID: \"a4a5327f-8708-421b-a361-cb948df8f801\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm2j2" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901125 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67dcbf0b-40be-4fae-967b-d049b796d2f5-client-ca\") pod \"controller-manager-879f6c89f-9txkp\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901164 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-config\") pod \"route-controller-manager-6576b87f9c-h9n94\" (UID: \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901181 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a61192e1-b3ed-4fc0-80fc-499fe120edb4-machine-approver-tls\") pod \"machine-approver-56656f9798-smgx9\" (UID: \"a61192e1-b3ed-4fc0-80fc-499fe120edb4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smgx9" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901206 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a63c533-3225-429d-b1d0-d3c8592a71f1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lbzqr\" (UID: \"8a63c533-3225-429d-b1d0-d3c8592a71f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lbzqr" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901228 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pr76\" (UniqueName: 
\"kubernetes.io/projected/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-kube-api-access-4pr76\") pod \"route-controller-manager-6576b87f9c-h9n94\" (UID: \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901246 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901270 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9afb1c84-e82c-43ec-9104-03fba7d404ef-serving-cert\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901291 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9afb1c84-e82c-43ec-9104-03fba7d404ef-etcd-client\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901308 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6876f05d-5d39-418b-8697-4dfbd5600c92-etcd-client\") pod \"etcd-operator-b45778765-jk4zv\" (UID: \"6876f05d-5d39-418b-8697-4dfbd5600c92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901329 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6876f05d-5d39-418b-8697-4dfbd5600c92-etcd-service-ca\") pod \"etcd-operator-b45778765-jk4zv\" (UID: \"6876f05d-5d39-418b-8697-4dfbd5600c92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901351 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9afb1c84-e82c-43ec-9104-03fba7d404ef-audit-policies\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901370 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-258ps\" (UniqueName: \"kubernetes.io/projected/9b594229-eae0-42db-b340-0f23b9158cda-kube-api-access-258ps\") pod \"openshift-apiserver-operator-796bbdcf4f-dq767\" (UID: \"9b594229-eae0-42db-b340-0f23b9158cda\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dq767" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901390 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-serving-cert\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901427 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-audit\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: 
I0218 19:20:46.901452 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dad887-5ca2-42ea-8e86-357ee76b0b51-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nqhdr\" (UID: \"a3dad887-5ca2-42ea-8e86-357ee76b0b51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901482 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khqnk\" (UniqueName: \"kubernetes.io/projected/a3dad887-5ca2-42ea-8e86-357ee76b0b51-kube-api-access-khqnk\") pod \"cluster-image-registry-operator-dc59b4c8b-nqhdr\" (UID: \"a3dad887-5ca2-42ea-8e86-357ee76b0b51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901516 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67dcbf0b-40be-4fae-967b-d049b796d2f5-config\") pod \"controller-manager-879f6c89f-9txkp\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901542 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9afb1c84-e82c-43ec-9104-03fba7d404ef-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901562 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-client-ca\") pod 
\"route-controller-manager-6576b87f9c-h9n94\" (UID: \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901581 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-audit-dir\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901604 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c481ff0-d16c-4791-a274-d17c7269f430-service-ca-bundle\") pod \"authentication-operator-69f744f599-xchrj\" (UID: \"0c481ff0-d16c-4791-a274-d17c7269f430\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901625 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/03fe0d37-55e7-485b-9ac2-b0289b860a8a-metrics-tls\") pod \"dns-operator-744455d44c-27w4f\" (UID: \"03fe0d37-55e7-485b-9ac2-b0289b860a8a\") " pod="openshift-dns-operator/dns-operator-744455d44c-27w4f" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901647 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c481ff0-d16c-4791-a274-d17c7269f430-serving-cert\") pod \"authentication-operator-69f744f599-xchrj\" (UID: \"0c481ff0-d16c-4791-a274-d17c7269f430\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901669 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a3dad887-5ca2-42ea-8e86-357ee76b0b51-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nqhdr\" (UID: \"a3dad887-5ca2-42ea-8e86-357ee76b0b51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901687 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-config\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901712 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b594229-eae0-42db-b340-0f23b9158cda-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dq767\" (UID: \"9b594229-eae0-42db-b340-0f23b9158cda\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dq767" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901720 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c481ff0-d16c-4791-a274-d17c7269f430-config\") pod \"authentication-operator-69f744f599-xchrj\" (UID: \"0c481ff0-d16c-4791-a274-d17c7269f430\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901736 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9afb1c84-e82c-43ec-9104-03fba7d404ef-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901757 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrlt4\" (UniqueName: \"kubernetes.io/projected/0c481ff0-d16c-4791-a274-d17c7269f430-kube-api-access-nrlt4\") pod \"authentication-operator-69f744f599-xchrj\" (UID: \"0c481ff0-d16c-4791-a274-d17c7269f430\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901773 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j45bm"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901755 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a63c533-3225-429d-b1d0-d3c8592a71f1-config\") pod \"machine-api-operator-5694c8668f-lbzqr\" (UID: \"8a63c533-3225-429d-b1d0-d3c8592a71f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lbzqr" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.902445 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9afb1c84-e82c-43ec-9104-03fba7d404ef-audit-policies\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.902453 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j45bm" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.902847 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c481ff0-d16c-4791-a274-d17c7269f430-service-ca-bundle\") pod \"authentication-operator-69f744f599-xchrj\" (UID: \"0c481ff0-d16c-4791-a274-d17c7269f430\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.903439 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9afb1c84-e82c-43ec-9104-03fba7d404ef-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.904153 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-client-ca\") pod \"route-controller-manager-6576b87f9c-h9n94\" (UID: \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.904231 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-audit-dir\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.904285 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0c481ff0-d16c-4791-a274-d17c7269f430-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xchrj\" (UID: \"0c481ff0-d16c-4791-a274-d17c7269f430\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.905178 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-audit\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.905288 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.905381 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.905619 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b594229-eae0-42db-b340-0f23b9158cda-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dq767\" (UID: \"9b594229-eae0-42db-b340-0f23b9158cda\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dq767" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.906943 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-46gfl"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.907107 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-config\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc 
kubenswrapper[4754]: I0218 19:20:46.907418 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67dcbf0b-40be-4fae-967b-d049b796d2f5-client-ca\") pod \"controller-manager-879f6c89f-9txkp\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.908096 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67dcbf0b-40be-4fae-967b-d049b796d2f5-config\") pod \"controller-manager-879f6c89f-9txkp\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.908427 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67dcbf0b-40be-4fae-967b-d049b796d2f5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9txkp\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.908906 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9afb1c84-e82c-43ec-9104-03fba7d404ef-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909445 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b594229-eae0-42db-b340-0f23b9158cda-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dq767\" (UID: \"9b594229-eae0-42db-b340-0f23b9158cda\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dq767" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.901779 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fll9q\" (UniqueName: \"kubernetes.io/projected/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-kube-api-access-fll9q\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909582 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/19ba4f7e-bcca-4d8a-99f2-77e00a2eb255-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-72sjp\" (UID: \"19ba4f7e-bcca-4d8a-99f2-77e00a2eb255\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjp" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909612 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rskbd\" (UniqueName: \"kubernetes.io/projected/a61192e1-b3ed-4fc0-80fc-499fe120edb4-kube-api-access-rskbd\") pod \"machine-approver-56656f9798-smgx9\" (UID: \"a61192e1-b3ed-4fc0-80fc-499fe120edb4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smgx9" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909648 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-node-pullsecrets\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909664 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-encryption-config\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909679 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72jwg\" (UniqueName: \"kubernetes.io/projected/19ba4f7e-bcca-4d8a-99f2-77e00a2eb255-kube-api-access-72jwg\") pod \"cluster-samples-operator-665b6dd947-72sjp\" (UID: \"19ba4f7e-bcca-4d8a-99f2-77e00a2eb255\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjp" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909700 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6876f05d-5d39-418b-8697-4dfbd5600c92-serving-cert\") pod \"etcd-operator-b45778765-jk4zv\" (UID: \"6876f05d-5d39-418b-8697-4dfbd5600c92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909724 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llvkl\" (UniqueName: \"kubernetes.io/projected/8a63c533-3225-429d-b1d0-d3c8592a71f1-kube-api-access-llvkl\") pod \"machine-api-operator-5694c8668f-lbzqr\" (UID: \"8a63c533-3225-429d-b1d0-d3c8592a71f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lbzqr" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909744 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8a63c533-3225-429d-b1d0-d3c8592a71f1-images\") pod \"machine-api-operator-5694c8668f-lbzqr\" (UID: \"8a63c533-3225-429d-b1d0-d3c8592a71f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lbzqr" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 
19:20:46.909760 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-etcd-client\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909779 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64s5d\" (UniqueName: \"kubernetes.io/projected/bf0d5d00-668b-495d-8433-03a0b9853804-kube-api-access-64s5d\") pod \"migrator-59844c95c7-srtrk\" (UID: \"bf0d5d00-668b-495d-8433-03a0b9853804\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-srtrk" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909803 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9afb1c84-e82c-43ec-9104-03fba7d404ef-encryption-config\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909821 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnzvs\" (UniqueName: \"kubernetes.io/projected/9afb1c84-e82c-43ec-9104-03fba7d404ef-kube-api-access-dnzvs\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909843 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4a5327f-8708-421b-a361-cb948df8f801-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wm2j2\" (UID: \"a4a5327f-8708-421b-a361-cb948df8f801\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm2j2" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909864 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-etcd-serving-ca\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909880 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-image-import-ca\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909901 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3dad887-5ca2-42ea-8e86-357ee76b0b51-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nqhdr\" (UID: \"a3dad887-5ca2-42ea-8e86-357ee76b0b51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909924 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a61192e1-b3ed-4fc0-80fc-499fe120edb4-config\") pod \"machine-approver-56656f9798-smgx9\" (UID: \"a61192e1-b3ed-4fc0-80fc-499fe120edb4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smgx9" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909949 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grprw\" (UniqueName: 
\"kubernetes.io/projected/34a2348f-c1db-4d94-9943-d616526bd03b-kube-api-access-grprw\") pod \"control-plane-machine-set-operator-78cbb6b69f-jcngv\" (UID: \"34a2348f-c1db-4d94-9943-d616526bd03b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jcngv" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909970 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6876f05d-5d39-418b-8697-4dfbd5600c92-config\") pod \"etcd-operator-b45778765-jk4zv\" (UID: \"6876f05d-5d39-418b-8697-4dfbd5600c92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.909989 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6876f05d-5d39-418b-8697-4dfbd5600c92-etcd-ca\") pod \"etcd-operator-b45778765-jk4zv\" (UID: \"6876f05d-5d39-418b-8697-4dfbd5600c92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.915182 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-46gfl" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.915289 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a63c533-3225-429d-b1d0-d3c8592a71f1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lbzqr\" (UID: \"8a63c533-3225-429d-b1d0-d3c8592a71f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lbzqr" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.916432 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8a63c533-3225-429d-b1d0-d3c8592a71f1-images\") pod \"machine-api-operator-5694c8668f-lbzqr\" (UID: \"8a63c533-3225-429d-b1d0-d3c8592a71f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lbzqr" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.916935 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-etcd-serving-ca\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.931424 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-node-pullsecrets\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.935356 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-k7hhc\" (UID: 
\"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.937206 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67dcbf0b-40be-4fae-967b-d049b796d2f5-serving-cert\") pod \"controller-manager-879f6c89f-9txkp\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.937583 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-serving-cert\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.939589 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.917475 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9afb1c84-e82c-43ec-9104-03fba7d404ef-serving-cert\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.942571 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c481ff0-d16c-4791-a274-d17c7269f430-serving-cert\") pod \"authentication-operator-69f744f599-xchrj\" (UID: \"0c481ff0-d16c-4791-a274-d17c7269f430\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.942736 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-encryption-config\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.942800 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/03fe0d37-55e7-485b-9ac2-b0289b860a8a-metrics-tls\") pod \"dns-operator-744455d44c-27w4f\" (UID: \"03fe0d37-55e7-485b-9ac2-b0289b860a8a\") " pod="openshift-dns-operator/dns-operator-744455d44c-27w4f" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.942922 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9afb1c84-e82c-43ec-9104-03fba7d404ef-etcd-client\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.943238 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-etcd-client\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.944640 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.948041 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rht4g"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.949792 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-serving-cert\") pod \"route-controller-manager-6576b87f9c-h9n94\" (UID: \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.956785 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.959069 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-image-import-ca\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.985411 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.985905 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9afb1c84-e82c-43ec-9104-03fba7d404ef-encryption-config\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.986134 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rht4g" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.986723 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-config\") pod \"route-controller-manager-6576b87f9c-h9n94\" (UID: \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.987403 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.989177 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.989187 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.989902 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.990662 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.992583 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.994252 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.994496 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qcqwx"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.995103 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.995476 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.995679 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.995866 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.996742 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5sp2v"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.997068 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.997095 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5sp2v" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.997212 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-nszgz"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.997728 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-nszgz" Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.998915 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-27w4f"] Feb 18 19:20:46 crc kubenswrapper[4754]: I0218 19:20:46.999942 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wljc4"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.000404 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.001430 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.001818 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.002456 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.002812 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.004288 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9txkp"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.005547 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-chrt8"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.006122 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-chrt8" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.008299 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zg7hz"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.008871 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zg7hz" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.009805 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xchrj"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.010764 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.010831 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkhws\" (UniqueName: 
\"kubernetes.io/projected/5b43afdc-1af8-457c-9218-d416a0bdadc3-kube-api-access-lkhws\") pod \"machine-config-operator-74547568cd-znncb\" (UID: \"5b43afdc-1af8-457c-9218-d416a0bdadc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-znncb" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.010864 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64s5d\" (UniqueName: \"kubernetes.io/projected/bf0d5d00-668b-495d-8433-03a0b9853804-kube-api-access-64s5d\") pod \"migrator-59844c95c7-srtrk\" (UID: \"bf0d5d00-668b-495d-8433-03a0b9853804\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-srtrk" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.010898 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4a5327f-8708-421b-a361-cb948df8f801-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wm2j2\" (UID: \"a4a5327f-8708-421b-a361-cb948df8f801\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm2j2" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.010923 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.010949 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59f7de33-73f2-480a-bc50-42be734c1764-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gmzvv\" (UID: \"59f7de33-73f2-480a-bc50-42be734c1764\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmzvv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.010974 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-audit-dir\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.010999 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3dad887-5ca2-42ea-8e86-357ee76b0b51-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nqhdr\" (UID: \"a3dad887-5ca2-42ea-8e86-357ee76b0b51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011024 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a61192e1-b3ed-4fc0-80fc-499fe120edb4-config\") pod \"machine-approver-56656f9798-smgx9\" (UID: \"a61192e1-b3ed-4fc0-80fc-499fe120edb4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smgx9" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011049 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grprw\" (UniqueName: \"kubernetes.io/projected/34a2348f-c1db-4d94-9943-d616526bd03b-kube-api-access-grprw\") pod \"control-plane-machine-set-operator-78cbb6b69f-jcngv\" (UID: \"34a2348f-c1db-4d94-9943-d616526bd03b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jcngv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011077 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6876f05d-5d39-418b-8697-4dfbd5600c92-config\") pod \"etcd-operator-b45778765-jk4zv\" (UID: \"6876f05d-5d39-418b-8697-4dfbd5600c92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011132 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6876f05d-5d39-418b-8697-4dfbd5600c92-etcd-ca\") pod \"etcd-operator-b45778765-jk4zv\" (UID: \"6876f05d-5d39-418b-8697-4dfbd5600c92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011179 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nwpf\" (UniqueName: \"kubernetes.io/projected/6876f05d-5d39-418b-8697-4dfbd5600c92-kube-api-access-2nwpf\") pod \"etcd-operator-b45778765-jk4zv\" (UID: \"6876f05d-5d39-418b-8697-4dfbd5600c92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011202 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4a5327f-8708-421b-a361-cb948df8f801-config\") pod \"kube-apiserver-operator-766d6c64bb-wm2j2\" (UID: \"a4a5327f-8708-421b-a361-cb948df8f801\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm2j2" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011230 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c59f4aff-f25f-4882-92fa-3f033eb9b614-proxy-tls\") pod \"machine-config-controller-84d6567774-rht4g\" (UID: \"c59f4aff-f25f-4882-92fa-3f033eb9b614\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rht4g" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011264 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9e74a6ce-6ddf-437a-8b5b-4587c90df3f5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ltzt5\" (UID: \"9e74a6ce-6ddf-437a-8b5b-4587c90df3f5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011304 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/34a2348f-c1db-4d94-9943-d616526bd03b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jcngv\" (UID: \"34a2348f-c1db-4d94-9943-d616526bd03b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jcngv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011334 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4ab90eb-251c-4e6b-965f-fbda2779d2ee-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-j45bm\" (UID: \"e4ab90eb-251c-4e6b-965f-fbda2779d2ee\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j45bm" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011366 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-audit-policies\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011392 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/5fe98aac-9aed-4963-a4a9-eeaa65a11720-trusted-ca\") pod \"console-operator-58897d9998-bs2kd\" (UID: \"5fe98aac-9aed-4963-a4a9-eeaa65a11720\") " pod="openshift-console-operator/console-operator-58897d9998-bs2kd" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011416 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv6rj\" (UniqueName: \"kubernetes.io/projected/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-kube-api-access-kv6rj\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011441 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l4jw\" (UniqueName: \"kubernetes.io/projected/c59f4aff-f25f-4882-92fa-3f033eb9b614-kube-api-access-7l4jw\") pod \"machine-config-controller-84d6567774-rht4g\" (UID: \"c59f4aff-f25f-4882-92fa-3f033eb9b614\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rht4g" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011467 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011517 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-console-oauth-config\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " 
pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011533 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a61192e1-b3ed-4fc0-80fc-499fe120edb4-config\") pod \"machine-approver-56656f9798-smgx9\" (UID: \"a61192e1-b3ed-4fc0-80fc-499fe120edb4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smgx9" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011545 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe98aac-9aed-4963-a4a9-eeaa65a11720-serving-cert\") pod \"console-operator-58897d9998-bs2kd\" (UID: \"5fe98aac-9aed-4963-a4a9-eeaa65a11720\") " pod="openshift-console-operator/console-operator-58897d9998-bs2kd" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011584 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-console-serving-cert\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011607 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe98aac-9aed-4963-a4a9-eeaa65a11720-config\") pod \"console-operator-58897d9998-bs2kd\" (UID: \"5fe98aac-9aed-4963-a4a9-eeaa65a11720\") " pod="openshift-console-operator/console-operator-58897d9998-bs2kd" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011626 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59f7de33-73f2-480a-bc50-42be734c1764-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-gmzvv\" (UID: \"59f7de33-73f2-480a-bc50-42be734c1764\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmzvv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011641 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ffc59d20-7b90-4a2e-bb61-feed9fb458e4-profile-collector-cert\") pod \"catalog-operator-68c6474976-zn85c\" (UID: \"ffc59d20-7b90-4a2e-bb61-feed9fb458e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011656 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b047a1-61d6-4237-93a2-82047effa98a-config\") pod \"kube-controller-manager-operator-78b949d7b-vphtb\" (UID: \"26b047a1-61d6-4237-93a2-82047effa98a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vphtb" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011688 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a61192e1-b3ed-4fc0-80fc-499fe120edb4-auth-proxy-config\") pod \"machine-approver-56656f9798-smgx9\" (UID: \"a61192e1-b3ed-4fc0-80fc-499fe120edb4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smgx9" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011703 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4a5327f-8708-421b-a361-cb948df8f801-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wm2j2\" (UID: \"a4a5327f-8708-421b-a361-cb948df8f801\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm2j2" 
Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011724 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a61192e1-b3ed-4fc0-80fc-499fe120edb4-machine-approver-tls\") pod \"machine-approver-56656f9798-smgx9\" (UID: \"a61192e1-b3ed-4fc0-80fc-499fe120edb4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smgx9" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011743 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c59f4aff-f25f-4882-92fa-3f033eb9b614-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rht4g\" (UID: \"c59f4aff-f25f-4882-92fa-3f033eb9b614\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rht4g" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011772 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fssp\" (UniqueName: \"kubernetes.io/projected/6d0c3c9b-2563-4887-a65e-d4777b64ad81-kube-api-access-7fssp\") pod \"multus-admission-controller-857f4d67dd-46gfl\" (UID: \"6d0c3c9b-2563-4887-a65e-d4777b64ad81\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-46gfl" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011790 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011810 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/6876f05d-5d39-418b-8697-4dfbd5600c92-etcd-client\") pod \"etcd-operator-b45778765-jk4zv\" (UID: \"6876f05d-5d39-418b-8697-4dfbd5600c92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011827 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011845 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5b43afdc-1af8-457c-9218-d416a0bdadc3-images\") pod \"machine-config-operator-74547568cd-znncb\" (UID: \"5b43afdc-1af8-457c-9218-d416a0bdadc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-znncb" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.011860 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5b43afdc-1af8-457c-9218-d416a0bdadc3-proxy-tls\") pod \"machine-config-operator-74547568cd-znncb\" (UID: \"5b43afdc-1af8-457c-9218-d416a0bdadc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-znncb" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.012205 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3dad887-5ca2-42ea-8e86-357ee76b0b51-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nqhdr\" (UID: \"a3dad887-5ca2-42ea-8e86-357ee76b0b51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr" Feb 18 
19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.012291 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6876f05d-5d39-418b-8697-4dfbd5600c92-config\") pod \"etcd-operator-b45778765-jk4zv\" (UID: \"6876f05d-5d39-418b-8697-4dfbd5600c92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.012364 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.012428 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6876f05d-5d39-418b-8697-4dfbd5600c92-etcd-ca\") pod \"etcd-operator-b45778765-jk4zv\" (UID: \"6876f05d-5d39-418b-8697-4dfbd5600c92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.012712 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-console-config\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.012739 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6876f05d-5d39-418b-8697-4dfbd5600c92-etcd-service-ca\") pod \"etcd-operator-b45778765-jk4zv\" (UID: \"6876f05d-5d39-418b-8697-4dfbd5600c92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.012756 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-service-ca\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.012773 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ffc59d20-7b90-4a2e-bb61-feed9fb458e4-srv-cert\") pod \"catalog-operator-68c6474976-zn85c\" (UID: \"ffc59d20-7b90-4a2e-bb61-feed9fb458e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.013194 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4ab90eb-251c-4e6b-965f-fbda2779d2ee-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-j45bm\" (UID: \"e4ab90eb-251c-4e6b-965f-fbda2779d2ee\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j45bm" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.013256 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e74a6ce-6ddf-437a-8b5b-4587c90df3f5-serving-cert\") pod \"openshift-config-operator-7777fb866f-ltzt5\" (UID: \"9e74a6ce-6ddf-437a-8b5b-4587c90df3f5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.013296 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7jp4\" (UniqueName: \"kubernetes.io/projected/9e74a6ce-6ddf-437a-8b5b-4587c90df3f5-kube-api-access-q7jp4\") pod \"openshift-config-operator-7777fb866f-ltzt5\" (UID: \"9e74a6ce-6ddf-437a-8b5b-4587c90df3f5\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.013329 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6d0c3c9b-2563-4887-a65e-d4777b64ad81-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-46gfl\" (UID: \"6d0c3c9b-2563-4887-a65e-d4777b64ad81\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-46gfl" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.013345 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26b047a1-61d6-4237-93a2-82047effa98a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vphtb\" (UID: \"26b047a1-61d6-4237-93a2-82047effa98a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vphtb" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.013366 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.013426 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4a5327f-8708-421b-a361-cb948df8f801-config\") pod \"kube-apiserver-operator-766d6c64bb-wm2j2\" (UID: \"a4a5327f-8708-421b-a361-cb948df8f801\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm2j2" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.013476 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a61192e1-b3ed-4fc0-80fc-499fe120edb4-auth-proxy-config\") pod \"machine-approver-56656f9798-smgx9\" (UID: \"a61192e1-b3ed-4fc0-80fc-499fe120edb4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smgx9" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.013687 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6876f05d-5d39-418b-8697-4dfbd5600c92-etcd-service-ca\") pod \"etcd-operator-b45778765-jk4zv\" (UID: \"6876f05d-5d39-418b-8697-4dfbd5600c92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.013707 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5m74r"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014089 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dad887-5ca2-42ea-8e86-357ee76b0b51-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nqhdr\" (UID: \"a3dad887-5ca2-42ea-8e86-357ee76b0b51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014117 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khqnk\" (UniqueName: \"kubernetes.io/projected/a3dad887-5ca2-42ea-8e86-357ee76b0b51-kube-api-access-khqnk\") pod \"cluster-image-registry-operator-dc59b4c8b-nqhdr\" (UID: \"a3dad887-5ca2-42ea-8e86-357ee76b0b51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014151 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7xtrx\" (UniqueName: \"kubernetes.io/projected/5fe98aac-9aed-4963-a4a9-eeaa65a11720-kube-api-access-7xtrx\") pod \"console-operator-58897d9998-bs2kd\" (UID: \"5fe98aac-9aed-4963-a4a9-eeaa65a11720\") " pod="openshift-console-operator/console-operator-58897d9998-bs2kd" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014170 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-trusted-ca-bundle\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014189 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b43afdc-1af8-457c-9218-d416a0bdadc3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-znncb\" (UID: \"5b43afdc-1af8-457c-9218-d416a0bdadc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-znncb" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014207 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014227 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z79q5\" (UniqueName: \"kubernetes.io/projected/e4ab90eb-251c-4e6b-965f-fbda2779d2ee-kube-api-access-z79q5\") pod \"kube-storage-version-migrator-operator-b67b599dd-j45bm\" (UID: 
\"e4ab90eb-251c-4e6b-965f-fbda2779d2ee\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j45bm" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014256 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a3dad887-5ca2-42ea-8e86-357ee76b0b51-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nqhdr\" (UID: \"a3dad887-5ca2-42ea-8e86-357ee76b0b51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014273 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014292 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014315 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5m74r" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014342 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/19ba4f7e-bcca-4d8a-99f2-77e00a2eb255-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-72sjp\" (UID: \"19ba4f7e-bcca-4d8a-99f2-77e00a2eb255\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjp" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014373 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn7vt\" (UniqueName: \"kubernetes.io/projected/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-kube-api-access-dn7vt\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014400 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rskbd\" (UniqueName: \"kubernetes.io/projected/a61192e1-b3ed-4fc0-80fc-499fe120edb4-kube-api-access-rskbd\") pod \"machine-approver-56656f9798-smgx9\" (UID: \"a61192e1-b3ed-4fc0-80fc-499fe120edb4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smgx9" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014417 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsg26\" (UniqueName: \"kubernetes.io/projected/ffc59d20-7b90-4a2e-bb61-feed9fb458e4-kube-api-access-fsg26\") pod \"catalog-operator-68c6474976-zn85c\" (UID: \"ffc59d20-7b90-4a2e-bb61-feed9fb458e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014432 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014449 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-oauth-serving-cert\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014470 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014496 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72jwg\" (UniqueName: \"kubernetes.io/projected/19ba4f7e-bcca-4d8a-99f2-77e00a2eb255-kube-api-access-72jwg\") pod \"cluster-samples-operator-665b6dd947-72sjp\" (UID: \"19ba4f7e-bcca-4d8a-99f2-77e00a2eb255\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjp" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014528 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6876f05d-5d39-418b-8697-4dfbd5600c92-serving-cert\") pod \"etcd-operator-b45778765-jk4zv\" (UID: 
\"6876f05d-5d39-418b-8697-4dfbd5600c92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014553 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bzc5\" (UniqueName: \"kubernetes.io/projected/59f7de33-73f2-480a-bc50-42be734c1764-kube-api-access-2bzc5\") pod \"openshift-controller-manager-operator-756b6f6bc6-gmzvv\" (UID: \"59f7de33-73f2-480a-bc50-42be734c1764\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmzvv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014570 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26b047a1-61d6-4237-93a2-82047effa98a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vphtb\" (UID: \"26b047a1-61d6-4237-93a2-82047effa98a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vphtb" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014599 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-72j8q"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.014957 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-72j8q" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.015542 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lbzqr"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.016593 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/34a2348f-c1db-4d94-9943-d616526bd03b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jcngv\" (UID: \"34a2348f-c1db-4d94-9943-d616526bd03b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jcngv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.018043 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-z5tf7"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.018414 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a61192e1-b3ed-4fc0-80fc-499fe120edb4-machine-approver-tls\") pod \"machine-approver-56656f9798-smgx9\" (UID: \"a61192e1-b3ed-4fc0-80fc-499fe120edb4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smgx9" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.018528 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6876f05d-5d39-418b-8697-4dfbd5600c92-etcd-client\") pod \"etcd-operator-b45778765-jk4zv\" (UID: \"6876f05d-5d39-418b-8697-4dfbd5600c92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.018672 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4a5327f-8708-421b-a361-cb948df8f801-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-wm2j2\" (UID: \"a4a5327f-8708-421b-a361-cb948df8f801\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm2j2" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.019465 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z5tf7" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.019602 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/19ba4f7e-bcca-4d8a-99f2-77e00a2eb255-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-72sjp\" (UID: \"19ba4f7e-bcca-4d8a-99f2-77e00a2eb255\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjp" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.019947 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6876f05d-5d39-418b-8697-4dfbd5600c92-serving-cert\") pod \"etcd-operator-b45778765-jk4zv\" (UID: \"6876f05d-5d39-418b-8697-4dfbd5600c92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.020690 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-k7hhc"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.021326 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3dad887-5ca2-42ea-8e86-357ee76b0b51-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nqhdr\" (UID: \"a3dad887-5ca2-42ea-8e86-357ee76b0b51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.022041 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-etcd-operator/etcd-operator-b45778765-jk4zv"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.023338 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.024763 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bs2kd"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.025757 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm2j2"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.026903 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.028172 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-srtrk"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.029233 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jcngv"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.030512 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.030727 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.031903 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjp"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.034853 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dq767"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.036613 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-7n89w"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.037272 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-7n89w" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.037622 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qcqwx"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.038619 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmzvv"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.040503 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.054671 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-72dh6"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.057774 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5m74r"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.059716 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j45bm"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.061509 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-znncb"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.063697 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5sp2v"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.065246 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.067071 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rht4g"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.069890 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.070011 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vphtb"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.074543 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lt44t"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.075865 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-chrt8"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.077029 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wljc4"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.078386 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.079486 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.080552 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.081589 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6vw4g"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.083886 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-2hzgm"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.084055 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.084800 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-46gfl"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.084992 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-2hzgm" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.085086 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-nszgz"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.086109 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zg7hz"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.087183 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2hzgm"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.088381 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z5tf7"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.090171 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6vw4g"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.097539 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 
19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.111696 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.115729 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5b43afdc-1af8-457c-9218-d416a0bdadc3-proxy-tls\") pod \"machine-config-operator-74547568cd-znncb\" (UID: \"5b43afdc-1af8-457c-9218-d416a0bdadc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-znncb" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.115766 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-service-ca\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.115792 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4ab90eb-251c-4e6b-965f-fbda2779d2ee-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-j45bm\" (UID: \"e4ab90eb-251c-4e6b-965f-fbda2779d2ee\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j45bm" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.115820 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e74a6ce-6ddf-437a-8b5b-4587c90df3f5-serving-cert\") pod \"openshift-config-operator-7777fb866f-ltzt5\" (UID: \"9e74a6ce-6ddf-437a-8b5b-4587c90df3f5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.115842 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q7jp4\" (UniqueName: \"kubernetes.io/projected/9e74a6ce-6ddf-437a-8b5b-4587c90df3f5-kube-api-access-q7jp4\") pod \"openshift-config-operator-7777fb866f-ltzt5\" (UID: \"9e74a6ce-6ddf-437a-8b5b-4587c90df3f5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.115869 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26b047a1-61d6-4237-93a2-82047effa98a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vphtb\" (UID: \"26b047a1-61d6-4237-93a2-82047effa98a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vphtb" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.115897 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.115926 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/996b25cf-442f-475a-93aa-3957be55d4f5-stats-auth\") pod \"router-default-5444994796-72j8q\" (UID: \"996b25cf-442f-475a-93aa-3957be55d4f5\") " pod="openshift-ingress/router-default-5444994796-72j8q" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.115950 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.115970 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xtrx\" (UniqueName: \"kubernetes.io/projected/5fe98aac-9aed-4963-a4a9-eeaa65a11720-kube-api-access-7xtrx\") pod \"console-operator-58897d9998-bs2kd\" (UID: \"5fe98aac-9aed-4963-a4a9-eeaa65a11720\") " pod="openshift-console-operator/console-operator-58897d9998-bs2kd" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.115990 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z79q5\" (UniqueName: \"kubernetes.io/projected/e4ab90eb-251c-4e6b-965f-fbda2779d2ee-kube-api-access-z79q5\") pod \"kube-storage-version-migrator-operator-b67b599dd-j45bm\" (UID: \"e4ab90eb-251c-4e6b-965f-fbda2779d2ee\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j45bm" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.116014 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2l8c\" (UniqueName: \"kubernetes.io/projected/996b25cf-442f-475a-93aa-3957be55d4f5-kube-api-access-w2l8c\") pod \"router-default-5444994796-72j8q\" (UID: \"996b25cf-442f-475a-93aa-3957be55d4f5\") " pod="openshift-ingress/router-default-5444994796-72j8q" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.116045 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-oauth-serving-cert\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " 
pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.116068 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.116096 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26b047a1-61d6-4237-93a2-82047effa98a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vphtb\" (UID: \"26b047a1-61d6-4237-93a2-82047effa98a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vphtb" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.116172 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-audit-dir\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.116202 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c59f4aff-f25f-4882-92fa-3f033eb9b614-proxy-tls\") pod \"machine-config-controller-84d6567774-rht4g\" (UID: \"c59f4aff-f25f-4882-92fa-3f033eb9b614\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rht4g" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.116227 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/9e74a6ce-6ddf-437a-8b5b-4587c90df3f5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ltzt5\" (UID: \"9e74a6ce-6ddf-437a-8b5b-4587c90df3f5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.116255 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4ab90eb-251c-4e6b-965f-fbda2779d2ee-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-j45bm\" (UID: \"e4ab90eb-251c-4e6b-965f-fbda2779d2ee\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j45bm" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.116277 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5fe98aac-9aed-4963-a4a9-eeaa65a11720-trusted-ca\") pod \"console-operator-58897d9998-bs2kd\" (UID: \"5fe98aac-9aed-4963-a4a9-eeaa65a11720\") " pod="openshift-console-operator/console-operator-58897d9998-bs2kd" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.116296 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/996b25cf-442f-475a-93aa-3957be55d4f5-metrics-certs\") pod \"router-default-5444994796-72j8q\" (UID: \"996b25cf-442f-475a-93aa-3957be55d4f5\") " pod="openshift-ingress/router-default-5444994796-72j8q" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.116318 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l4jw\" (UniqueName: \"kubernetes.io/projected/c59f4aff-f25f-4882-92fa-3f033eb9b614-kube-api-access-7l4jw\") pod \"machine-config-controller-84d6567774-rht4g\" (UID: \"c59f4aff-f25f-4882-92fa-3f033eb9b614\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rht4g" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.116338 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe98aac-9aed-4963-a4a9-eeaa65a11720-config\") pod \"console-operator-58897d9998-bs2kd\" (UID: \"5fe98aac-9aed-4963-a4a9-eeaa65a11720\") " pod="openshift-console-operator/console-operator-58897d9998-bs2kd" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.116366 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59f7de33-73f2-480a-bc50-42be734c1764-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gmzvv\" (UID: \"59f7de33-73f2-480a-bc50-42be734c1764\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmzvv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.116355 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-audit-dir\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.116830 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9e74a6ce-6ddf-437a-8b5b-4587c90df3f5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ltzt5\" (UID: \"9e74a6ce-6ddf-437a-8b5b-4587c90df3f5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.116387 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/26b047a1-61d6-4237-93a2-82047effa98a-config\") pod \"kube-controller-manager-operator-78b949d7b-vphtb\" (UID: \"26b047a1-61d6-4237-93a2-82047effa98a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vphtb" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.117099 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c59f4aff-f25f-4882-92fa-3f033eb9b614-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rht4g\" (UID: \"c59f4aff-f25f-4882-92fa-3f033eb9b614\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rht4g" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.117128 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fssp\" (UniqueName: \"kubernetes.io/projected/6d0c3c9b-2563-4887-a65e-d4777b64ad81-kube-api-access-7fssp\") pod \"multus-admission-controller-857f4d67dd-46gfl\" (UID: \"6d0c3c9b-2563-4887-a65e-d4777b64ad81\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-46gfl" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.117240 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5b43afdc-1af8-457c-9218-d416a0bdadc3-images\") pod \"machine-config-operator-74547568cd-znncb\" (UID: \"5b43afdc-1af8-457c-9218-d416a0bdadc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-znncb" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.117266 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-console-config\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 
crc kubenswrapper[4754]: I0218 19:20:47.117287 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ffc59d20-7b90-4a2e-bb61-feed9fb458e4-srv-cert\") pod \"catalog-operator-68c6474976-zn85c\" (UID: \"ffc59d20-7b90-4a2e-bb61-feed9fb458e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.117308 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6d0c3c9b-2563-4887-a65e-d4777b64ad81-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-46gfl\" (UID: \"6d0c3c9b-2563-4887-a65e-d4777b64ad81\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-46gfl" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.117573 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b047a1-61d6-4237-93a2-82047effa98a-config\") pod \"kube-controller-manager-operator-78b949d7b-vphtb\" (UID: \"26b047a1-61d6-4237-93a2-82047effa98a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vphtb" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.117902 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5fe98aac-9aed-4963-a4a9-eeaa65a11720-trusted-ca\") pod \"console-operator-58897d9998-bs2kd\" (UID: \"5fe98aac-9aed-4963-a4a9-eeaa65a11720\") " pod="openshift-console-operator/console-operator-58897d9998-bs2kd" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118061 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c59f4aff-f25f-4882-92fa-3f033eb9b614-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rht4g\" (UID: 
\"c59f4aff-f25f-4882-92fa-3f033eb9b614\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rht4g" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118097 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5b43afdc-1af8-457c-9218-d416a0bdadc3-images\") pod \"machine-config-operator-74547568cd-znncb\" (UID: \"5b43afdc-1af8-457c-9218-d416a0bdadc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-znncb" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118193 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b43afdc-1af8-457c-9218-d416a0bdadc3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-znncb\" (UID: \"5b43afdc-1af8-457c-9218-d416a0bdadc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-znncb" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118216 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-trusted-ca-bundle\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118245 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118262 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118284 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/996b25cf-442f-475a-93aa-3957be55d4f5-default-certificate\") pod \"router-default-5444994796-72j8q\" (UID: \"996b25cf-442f-475a-93aa-3957be55d4f5\") " pod="openshift-ingress/router-default-5444994796-72j8q" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118296 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe98aac-9aed-4963-a4a9-eeaa65a11720-config\") pod \"console-operator-58897d9998-bs2kd\" (UID: \"5fe98aac-9aed-4963-a4a9-eeaa65a11720\") " pod="openshift-console-operator/console-operator-58897d9998-bs2kd" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118366 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn7vt\" (UniqueName: \"kubernetes.io/projected/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-kube-api-access-dn7vt\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118402 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsg26\" (UniqueName: \"kubernetes.io/projected/ffc59d20-7b90-4a2e-bb61-feed9fb458e4-kube-api-access-fsg26\") pod \"catalog-operator-68c6474976-zn85c\" (UID: \"ffc59d20-7b90-4a2e-bb61-feed9fb458e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c" Feb 
18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118420 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118447 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bzc5\" (UniqueName: \"kubernetes.io/projected/59f7de33-73f2-480a-bc50-42be734c1764-kube-api-access-2bzc5\") pod \"openshift-controller-manager-operator-756b6f6bc6-gmzvv\" (UID: \"59f7de33-73f2-480a-bc50-42be734c1764\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmzvv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118468 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkhws\" (UniqueName: \"kubernetes.io/projected/5b43afdc-1af8-457c-9218-d416a0bdadc3-kube-api-access-lkhws\") pod \"machine-config-operator-74547568cd-znncb\" (UID: \"5b43afdc-1af8-457c-9218-d416a0bdadc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-znncb" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118486 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118507 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118532 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59f7de33-73f2-480a-bc50-42be734c1764-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gmzvv\" (UID: \"59f7de33-73f2-480a-bc50-42be734c1764\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmzvv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118574 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-audit-policies\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118595 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118676 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b43afdc-1af8-457c-9218-d416a0bdadc3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-znncb\" (UID: \"5b43afdc-1af8-457c-9218-d416a0bdadc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-znncb" 
Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.119236 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59f7de33-73f2-480a-bc50-42be734c1764-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gmzvv\" (UID: \"59f7de33-73f2-480a-bc50-42be734c1764\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmzvv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.119452 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-audit-policies\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.118631 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-console-oauth-config\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.119506 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv6rj\" (UniqueName: \"kubernetes.io/projected/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-kube-api-access-kv6rj\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.119529 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996b25cf-442f-475a-93aa-3957be55d4f5-service-ca-bundle\") pod \"router-default-5444994796-72j8q\" (UID: 
\"996b25cf-442f-475a-93aa-3957be55d4f5\") " pod="openshift-ingress/router-default-5444994796-72j8q" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.119551 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe98aac-9aed-4963-a4a9-eeaa65a11720-serving-cert\") pod \"console-operator-58897d9998-bs2kd\" (UID: \"5fe98aac-9aed-4963-a4a9-eeaa65a11720\") " pod="openshift-console-operator/console-operator-58897d9998-bs2kd" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.119571 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.119581 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-console-serving-cert\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.119727 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ffc59d20-7b90-4a2e-bb61-feed9fb458e4-profile-collector-cert\") pod \"catalog-operator-68c6474976-zn85c\" (UID: \"ffc59d20-7b90-4a2e-bb61-feed9fb458e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.119780 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.119806 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.119977 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.120925 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.121657 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.121739 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.121933 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5b43afdc-1af8-457c-9218-d416a0bdadc3-proxy-tls\") pod \"machine-config-operator-74547568cd-znncb\" (UID: \"5b43afdc-1af8-457c-9218-d416a0bdadc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-znncb" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.122118 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.122771 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.122771 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/26b047a1-61d6-4237-93a2-82047effa98a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vphtb\" (UID: \"26b047a1-61d6-4237-93a2-82047effa98a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vphtb" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.123947 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.125006 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.125452 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59f7de33-73f2-480a-bc50-42be734c1764-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gmzvv\" (UID: \"59f7de33-73f2-480a-bc50-42be734c1764\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmzvv" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.127227 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.127577 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.129497 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe98aac-9aed-4963-a4a9-eeaa65a11720-serving-cert\") pod \"console-operator-58897d9998-bs2kd\" (UID: \"5fe98aac-9aed-4963-a4a9-eeaa65a11720\") " pod="openshift-console-operator/console-operator-58897d9998-bs2kd" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.131079 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.150740 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.172121 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.190595 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.193601 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e74a6ce-6ddf-437a-8b5b-4587c90df3f5-serving-cert\") pod \"openshift-config-operator-7777fb866f-ltzt5\" (UID: 
\"9e74a6ce-6ddf-437a-8b5b-4587c90df3f5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.210406 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.220953 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/996b25cf-442f-475a-93aa-3957be55d4f5-stats-auth\") pod \"router-default-5444994796-72j8q\" (UID: \"996b25cf-442f-475a-93aa-3957be55d4f5\") " pod="openshift-ingress/router-default-5444994796-72j8q" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.221179 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2l8c\" (UniqueName: \"kubernetes.io/projected/996b25cf-442f-475a-93aa-3957be55d4f5-kube-api-access-w2l8c\") pod \"router-default-5444994796-72j8q\" (UID: \"996b25cf-442f-475a-93aa-3957be55d4f5\") " pod="openshift-ingress/router-default-5444994796-72j8q" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.221329 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/996b25cf-442f-475a-93aa-3957be55d4f5-metrics-certs\") pod \"router-default-5444994796-72j8q\" (UID: \"996b25cf-442f-475a-93aa-3957be55d4f5\") " pod="openshift-ingress/router-default-5444994796-72j8q" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.221472 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/996b25cf-442f-475a-93aa-3957be55d4f5-default-certificate\") pod \"router-default-5444994796-72j8q\" (UID: \"996b25cf-442f-475a-93aa-3957be55d4f5\") " pod="openshift-ingress/router-default-5444994796-72j8q" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.221655 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996b25cf-442f-475a-93aa-3957be55d4f5-service-ca-bundle\") pod \"router-default-5444994796-72j8q\" (UID: \"996b25cf-442f-475a-93aa-3957be55d4f5\") " pod="openshift-ingress/router-default-5444994796-72j8q" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.230373 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.249924 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.253992 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-console-serving-cert\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.270506 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.289736 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.303166 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-console-oauth-config\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.311887 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 19:20:47 
crc kubenswrapper[4754]: I0218 19:20:47.319867 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-console-config\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.333198 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.337203 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-service-ca\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.358426 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.359699 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-trusted-ca-bundle\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.370948 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.376981 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-oauth-serving-cert\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " 
pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.422363 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjsgw\" (UniqueName: \"kubernetes.io/projected/03fe0d37-55e7-485b-9ac2-b0289b860a8a-kube-api-access-vjsgw\") pod \"dns-operator-744455d44c-27w4f\" (UID: \"03fe0d37-55e7-485b-9ac2-b0289b860a8a\") " pod="openshift-dns-operator/dns-operator-744455d44c-27w4f" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.427906 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fll9q\" (UniqueName: \"kubernetes.io/projected/f3e6c2ae-d03b-420b-9272-cfbdc82a78e1-kube-api-access-fll9q\") pod \"apiserver-76f77b778f-k7hhc\" (UID: \"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1\") " pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.450523 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.456495 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-258ps\" (UniqueName: \"kubernetes.io/projected/9b594229-eae0-42db-b340-0f23b9158cda-kube-api-access-258ps\") pod \"openshift-apiserver-operator-796bbdcf4f-dq767\" (UID: \"9b594229-eae0-42db-b340-0f23b9158cda\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dq767" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.485056 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brppz\" (UniqueName: \"kubernetes.io/projected/67dcbf0b-40be-4fae-967b-d049b796d2f5-kube-api-access-brppz\") pod \"controller-manager-879f6c89f-9txkp\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 
19:20:47.504678 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pr76\" (UniqueName: \"kubernetes.io/projected/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-kube-api-access-4pr76\") pod \"route-controller-manager-6576b87f9c-h9n94\" (UID: \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.525207 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrlt4\" (UniqueName: \"kubernetes.io/projected/0c481ff0-d16c-4791-a274-d17c7269f430-kube-api-access-nrlt4\") pod \"authentication-operator-69f744f599-xchrj\" (UID: \"0c481ff0-d16c-4791-a274-d17c7269f430\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.530622 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.541087 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6d0c3c9b-2563-4887-a65e-d4777b64ad81-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-46gfl\" (UID: \"6d0c3c9b-2563-4887-a65e-d4777b64ad81\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-46gfl" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.550906 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.570187 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.591613 4754 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.602895 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4ab90eb-251c-4e6b-965f-fbda2779d2ee-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-j45bm\" (UID: \"e4ab90eb-251c-4e6b-965f-fbda2779d2ee\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j45bm" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.608187 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.610953 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.617730 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4ab90eb-251c-4e6b-965f-fbda2779d2ee-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-j45bm\" (UID: \"e4ab90eb-251c-4e6b-965f-fbda2779d2ee\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j45bm" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.630357 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.645567 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.674618 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnzvs\" (UniqueName: \"kubernetes.io/projected/9afb1c84-e82c-43ec-9104-03fba7d404ef-kube-api-access-dnzvs\") pod \"apiserver-7bbb656c7d-jr9lw\" (UID: \"9afb1c84-e82c-43ec-9104-03fba7d404ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.676259 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.692078 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.692441 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llvkl\" (UniqueName: \"kubernetes.io/projected/8a63c533-3225-429d-b1d0-d3c8592a71f1-kube-api-access-llvkl\") pod \"machine-api-operator-5694c8668f-lbzqr\" (UID: \"8a63c533-3225-429d-b1d0-d3c8592a71f1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lbzqr" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.696403 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.706795 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dq767" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.720024 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-27w4f" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.730969 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.753355 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.757582 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.771040 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.775411 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c59f4aff-f25f-4882-92fa-3f033eb9b614-proxy-tls\") pod \"machine-config-controller-84d6567774-rht4g\" (UID: \"c59f4aff-f25f-4882-92fa-3f033eb9b614\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rht4g" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.790710 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.791536 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ffc59d20-7b90-4a2e-bb61-feed9fb458e4-srv-cert\") pod \"catalog-operator-68c6474976-zn85c\" (UID: \"ffc59d20-7b90-4a2e-bb61-feed9fb458e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.805564 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ffc59d20-7b90-4a2e-bb61-feed9fb458e4-profile-collector-cert\") pod \"catalog-operator-68c6474976-zn85c\" (UID: \"ffc59d20-7b90-4a2e-bb61-feed9fb458e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.813399 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.831168 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.853377 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.871440 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.891578 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.892397 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9txkp"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.913177 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.931781 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-k7hhc"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.934677 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lbzqr" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.942668 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.950254 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 19:20:47 crc kubenswrapper[4754]: W0218 19:20:47.954277 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3e6c2ae_d03b_420b_9272_cfbdc82a78e1.slice/crio-1ac0ba1cb523c0c28ac90abf3ba1dba692eb7c07118bb8037270d1bdcbcdb23a WatchSource:0}: Error finding container 1ac0ba1cb523c0c28ac90abf3ba1dba692eb7c07118bb8037270d1bdcbcdb23a: Status 404 returned error can't find the container with id 1ac0ba1cb523c0c28ac90abf3ba1dba692eb7c07118bb8037270d1bdcbcdb23a Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.959286 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94"] Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.961133 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.970304 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.990838 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 19:20:47 crc kubenswrapper[4754]: W0218 19:20:47.991344 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd130569b_03ff_4e1b_8cd4_cf0eca8ff4e6.slice/crio-2a676fe8cb20b5033b5cc639b17a1309d4d3c9dd862ee6cae62d41f9a58df9a2 WatchSource:0}: Error finding container 2a676fe8cb20b5033b5cc639b17a1309d4d3c9dd862ee6cae62d41f9a58df9a2: Status 404 returned error can't find the container with id 2a676fe8cb20b5033b5cc639b17a1309d4d3c9dd862ee6cae62d41f9a58df9a2 Feb 18 19:20:47 crc kubenswrapper[4754]: I0218 19:20:47.999088 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" event={"ID":"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1","Type":"ContainerStarted","Data":"1ac0ba1cb523c0c28ac90abf3ba1dba692eb7c07118bb8037270d1bdcbcdb23a"} Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.003707 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" event={"ID":"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6","Type":"ContainerStarted","Data":"2a676fe8cb20b5033b5cc639b17a1309d4d3c9dd862ee6cae62d41f9a58df9a2"} Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.004718 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" 
event={"ID":"67dcbf0b-40be-4fae-967b-d049b796d2f5","Type":"ContainerStarted","Data":"4e0650903b8732d56ae8d25ee102380382a60762fbecc1ea28ee40a3b284e085"}
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.008412 4754 request.go:700] Waited for 1.01104353s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&limit=500&resourceVersion=0
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.010281 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.030656 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.054915 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.062915 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dq767"]
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.070664 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.091845 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.111123 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.130122 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.163981 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.165158 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lbzqr"]
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.173520 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.191662 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.192551 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw"]
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.211650 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 18 19:20:48 crc kubenswrapper[4754]: E0218 19:20:48.221566 4754 secret.go:188] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition
Feb 18 19:20:48 crc kubenswrapper[4754]: E0218 19:20:48.221676 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996b25cf-442f-475a-93aa-3957be55d4f5-metrics-certs podName:996b25cf-442f-475a-93aa-3957be55d4f5 nodeName:}" failed. No retries permitted until 2026-02-18 19:20:48.721651336 +0000 UTC m=+151.172064132 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/996b25cf-442f-475a-93aa-3957be55d4f5-metrics-certs") pod "router-default-5444994796-72j8q" (UID: "996b25cf-442f-475a-93aa-3957be55d4f5") : failed to sync secret cache: timed out waiting for the condition
Feb 18 19:20:48 crc kubenswrapper[4754]: E0218 19:20:48.221929 4754 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 18 19:20:48 crc kubenswrapper[4754]: E0218 19:20:48.224415 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/996b25cf-442f-475a-93aa-3957be55d4f5-service-ca-bundle podName:996b25cf-442f-475a-93aa-3957be55d4f5 nodeName:}" failed. No retries permitted until 2026-02-18 19:20:48.724391163 +0000 UTC m=+151.174803959 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/996b25cf-442f-475a-93aa-3957be55d4f5-service-ca-bundle") pod "router-default-5444994796-72j8q" (UID: "996b25cf-442f-475a-93aa-3957be55d4f5") : failed to sync configmap cache: timed out waiting for the condition
Feb 18 19:20:48 crc kubenswrapper[4754]: E0218 19:20:48.223044 4754 secret.go:188] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition
Feb 18 19:20:48 crc kubenswrapper[4754]: E0218 19:20:48.224590 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996b25cf-442f-475a-93aa-3957be55d4f5-default-certificate podName:996b25cf-442f-475a-93aa-3957be55d4f5 nodeName:}" failed. No retries permitted until 2026-02-18 19:20:48.724578068 +0000 UTC m=+151.174990864 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/996b25cf-442f-475a-93aa-3957be55d4f5-default-certificate") pod "router-default-5444994796-72j8q" (UID: "996b25cf-442f-475a-93aa-3957be55d4f5") : failed to sync secret cache: timed out waiting for the condition
Feb 18 19:20:48 crc kubenswrapper[4754]: E0218 19:20:48.223269 4754 secret.go:188] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition
Feb 18 19:20:48 crc kubenswrapper[4754]: E0218 19:20:48.224628 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996b25cf-442f-475a-93aa-3957be55d4f5-stats-auth podName:996b25cf-442f-475a-93aa-3957be55d4f5 nodeName:}" failed. No retries permitted until 2026-02-18 19:20:48.724622349 +0000 UTC m=+151.175035145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/996b25cf-442f-475a-93aa-3957be55d4f5-stats-auth") pod "router-default-5444994796-72j8q" (UID: "996b25cf-442f-475a-93aa-3957be55d4f5") : failed to sync secret cache: timed out waiting for the condition
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.238427 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.238522 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xchrj"]
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.246096 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-27w4f"]
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.249766 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 18 19:20:48 crc kubenswrapper[4754]: W0218 19:20:48.259301 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03fe0d37_55e7_485b_9ac2_b0289b860a8a.slice/crio-20f6ef084a228ed6eac69d1a6e1842d56b70205f06d1945b87d589408f463712 WatchSource:0}: Error finding container 20f6ef084a228ed6eac69d1a6e1842d56b70205f06d1945b87d589408f463712: Status 404 returned error can't find the container with id 20f6ef084a228ed6eac69d1a6e1842d56b70205f06d1945b87d589408f463712
Feb 18 19:20:48 crc kubenswrapper[4754]: W0218 19:20:48.260647 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c481ff0_d16c_4791_a274_d17c7269f430.slice/crio-cae89d329e3e210b5520b6e0858bf03522776d4866f4af2d43744ebdc8b70c66 WatchSource:0}: Error finding container cae89d329e3e210b5520b6e0858bf03522776d4866f4af2d43744ebdc8b70c66: Status 404 returned error can't find the container with id cae89d329e3e210b5520b6e0858bf03522776d4866f4af2d43744ebdc8b70c66
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.269789 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.290873 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.312707 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.330032 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.351056 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.371113 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.412011 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64s5d\" (UniqueName: \"kubernetes.io/projected/bf0d5d00-668b-495d-8433-03a0b9853804-kube-api-access-64s5d\") pod \"migrator-59844c95c7-srtrk\" (UID: \"bf0d5d00-668b-495d-8433-03a0b9853804\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-srtrk"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.426289 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grprw\" (UniqueName: \"kubernetes.io/projected/34a2348f-c1db-4d94-9943-d616526bd03b-kube-api-access-grprw\") pod \"control-plane-machine-set-operator-78cbb6b69f-jcngv\" (UID: \"34a2348f-c1db-4d94-9943-d616526bd03b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jcngv"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.446318 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nwpf\" (UniqueName: \"kubernetes.io/projected/6876f05d-5d39-418b-8697-4dfbd5600c92-kube-api-access-2nwpf\") pod \"etcd-operator-b45778765-jk4zv\" (UID: \"6876f05d-5d39-418b-8697-4dfbd5600c92\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.464333 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4a5327f-8708-421b-a361-cb948df8f801-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wm2j2\" (UID: \"a4a5327f-8708-421b-a361-cb948df8f801\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm2j2"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.470215 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.490505 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.497506 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-srtrk"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.507405 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm2j2"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.511031 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.531810 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.551098 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.598800 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khqnk\" (UniqueName: \"kubernetes.io/projected/a3dad887-5ca2-42ea-8e86-357ee76b0b51-kube-api-access-khqnk\") pod \"cluster-image-registry-operator-dc59b4c8b-nqhdr\" (UID: \"a3dad887-5ca2-42ea-8e86-357ee76b0b51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.604788 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a3dad887-5ca2-42ea-8e86-357ee76b0b51-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nqhdr\" (UID: \"a3dad887-5ca2-42ea-8e86-357ee76b0b51\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.613687 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.632222 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.638517 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.652184 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.670976 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.686746 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jcngv"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.692119 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.698176 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-srtrk"]
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.710594 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.727295 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm2j2"]
Feb 18 19:20:48 crc kubenswrapper[4754]: W0218 19:20:48.730192 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf0d5d00_668b_495d_8433_03a0b9853804.slice/crio-1e953f9e6cbd2db874599ab03e1a8f90c0c3241c095500db71089341d0745bd7 WatchSource:0}: Error finding container 1e953f9e6cbd2db874599ab03e1a8f90c0c3241c095500db71089341d0745bd7: Status 404 returned error can't find the container with id 1e953f9e6cbd2db874599ab03e1a8f90c0c3241c095500db71089341d0745bd7
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.730399 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.744559 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/996b25cf-442f-475a-93aa-3957be55d4f5-metrics-certs\") pod \"router-default-5444994796-72j8q\" (UID: \"996b25cf-442f-475a-93aa-3957be55d4f5\") " pod="openshift-ingress/router-default-5444994796-72j8q"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.744658 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/996b25cf-442f-475a-93aa-3957be55d4f5-default-certificate\") pod \"router-default-5444994796-72j8q\" (UID: \"996b25cf-442f-475a-93aa-3957be55d4f5\") " pod="openshift-ingress/router-default-5444994796-72j8q"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.744767 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996b25cf-442f-475a-93aa-3957be55d4f5-service-ca-bundle\") pod \"router-default-5444994796-72j8q\" (UID: \"996b25cf-442f-475a-93aa-3957be55d4f5\") " pod="openshift-ingress/router-default-5444994796-72j8q"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.744809 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/996b25cf-442f-475a-93aa-3957be55d4f5-stats-auth\") pod \"router-default-5444994796-72j8q\" (UID: \"996b25cf-442f-475a-93aa-3957be55d4f5\") " pod="openshift-ingress/router-default-5444994796-72j8q"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.746009 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/996b25cf-442f-475a-93aa-3957be55d4f5-service-ca-bundle\") pod \"router-default-5444994796-72j8q\" (UID: \"996b25cf-442f-475a-93aa-3957be55d4f5\") " pod="openshift-ingress/router-default-5444994796-72j8q"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.750414 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/996b25cf-442f-475a-93aa-3957be55d4f5-metrics-certs\") pod \"router-default-5444994796-72j8q\" (UID: \"996b25cf-442f-475a-93aa-3957be55d4f5\") " pod="openshift-ingress/router-default-5444994796-72j8q"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.751986 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/996b25cf-442f-475a-93aa-3957be55d4f5-stats-auth\") pod \"router-default-5444994796-72j8q\" (UID: \"996b25cf-442f-475a-93aa-3957be55d4f5\") " pod="openshift-ingress/router-default-5444994796-72j8q"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.753667 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/996b25cf-442f-475a-93aa-3957be55d4f5-default-certificate\") pod \"router-default-5444994796-72j8q\" (UID: \"996b25cf-442f-475a-93aa-3957be55d4f5\") " pod="openshift-ingress/router-default-5444994796-72j8q"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.766715 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.768772 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rskbd\" (UniqueName: \"kubernetes.io/projected/a61192e1-b3ed-4fc0-80fc-499fe120edb4-kube-api-access-rskbd\") pod \"machine-approver-56656f9798-smgx9\" (UID: \"a61192e1-b3ed-4fc0-80fc-499fe120edb4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smgx9"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.788728 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72jwg\" (UniqueName: \"kubernetes.io/projected/19ba4f7e-bcca-4d8a-99f2-77e00a2eb255-kube-api-access-72jwg\") pod \"cluster-samples-operator-665b6dd947-72sjp\" (UID: \"19ba4f7e-bcca-4d8a-99f2-77e00a2eb255\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjp"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.793200 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.815245 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.830664 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.849636 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-jk4zv"]
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.851915 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.872476 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.894361 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.907225 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jcngv"]
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.911320 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.943232 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smgx9"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.955673 4754 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.971373 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.971561 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjp"
Feb 18 19:20:48 crc kubenswrapper[4754]: I0218 19:20:48.991939 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.000760 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr"]
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.008427 4754 request.go:700] Waited for 1.923169226s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&limit=500&resourceVersion=0
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.010885 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 18 19:20:49 crc kubenswrapper[4754]: W0218 19:20:49.018363 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda61192e1_b3ed_4fc0_80fc_499fe120edb4.slice/crio-fdeef1bce4f02c4142abc30210f3da0ad467de13130178f4adf9e7306cf7edba WatchSource:0}: Error finding container fdeef1bce4f02c4142abc30210f3da0ad467de13130178f4adf9e7306cf7edba: Status 404 returned error can't find the container with id fdeef1bce4f02c4142abc30210f3da0ad467de13130178f4adf9e7306cf7edba
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.031159 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.037989 4754 generic.go:334] "Generic (PLEG): container finished" podID="f3e6c2ae-d03b-420b-9272-cfbdc82a78e1" containerID="32811d60339e093b701b047c6a1b132d54bf8ac9df155dfe37b4e09f77359b5c" exitCode=0
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.038213 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" event={"ID":"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1","Type":"ContainerDied","Data":"32811d60339e093b701b047c6a1b132d54bf8ac9df155dfe37b4e09f77359b5c"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.043571 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-srtrk" event={"ID":"bf0d5d00-668b-495d-8433-03a0b9853804","Type":"ContainerStarted","Data":"781e091fec24629c37eaf4e1af8e6c25f176d81a233c5cd5295e266e989bc288"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.043625 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-srtrk" event={"ID":"bf0d5d00-668b-495d-8433-03a0b9853804","Type":"ContainerStarted","Data":"1e953f9e6cbd2db874599ab03e1a8f90c0c3241c095500db71089341d0745bd7"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.049456 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" event={"ID":"67dcbf0b-40be-4fae-967b-d049b796d2f5","Type":"ContainerStarted","Data":"a9fcb8505265517db86c99fc7b02ade25f69f0d4fda036642d820c71f50cf7b2"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.049648 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.050822 4754 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9txkp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.050830 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.050871 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" podUID="67dcbf0b-40be-4fae-967b-d049b796d2f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.052289 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lbzqr" event={"ID":"8a63c533-3225-429d-b1d0-d3c8592a71f1","Type":"ContainerStarted","Data":"8a52d2fc8bfd3c667005581ffbb93cf05d357c8dedac5473adc3e9893003299f"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.052325 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lbzqr" event={"ID":"8a63c533-3225-429d-b1d0-d3c8592a71f1","Type":"ContainerStarted","Data":"94f531e8a7f6ce0dc01410a438efeb704f454976034f7f316eddc399c50a5386"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.052338 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lbzqr" event={"ID":"8a63c533-3225-429d-b1d0-d3c8592a71f1","Type":"ContainerStarted","Data":"0e3b2dc090c5ef2f1e2e793e8eac483a32393b5d61cbf99a82d9648f19e3a652"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.055207 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj" event={"ID":"0c481ff0-d16c-4791-a274-d17c7269f430","Type":"ContainerStarted","Data":"65f98f7cf5f42a6c95f05be4a94f0323dc33b9619657933bb855dcc3f39b0cbe"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.055233 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj" event={"ID":"0c481ff0-d16c-4791-a274-d17c7269f430","Type":"ContainerStarted","Data":"cae89d329e3e210b5520b6e0858bf03522776d4866f4af2d43744ebdc8b70c66"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.062532 4754 generic.go:334] "Generic (PLEG): container finished" podID="9afb1c84-e82c-43ec-9104-03fba7d404ef" containerID="4e29d29d748efc5bfb0f211654bb64775acc760d33c566637ebf5c04083efa30" exitCode=0
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.062666 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" event={"ID":"9afb1c84-e82c-43ec-9104-03fba7d404ef","Type":"ContainerDied","Data":"4e29d29d748efc5bfb0f211654bb64775acc760d33c566637ebf5c04083efa30"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.063035 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" event={"ID":"9afb1c84-e82c-43ec-9104-03fba7d404ef","Type":"ContainerStarted","Data":"43bd518c25000442edec49466094403d43d41001f4b6b99b3e5cca07263b9ada"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.077362 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-27w4f" event={"ID":"03fe0d37-55e7-485b-9ac2-b0289b860a8a","Type":"ContainerStarted","Data":"fbe9be690f839791df0c30a887267e9781a71a8e0c883c5ad5b732d3b0e268aa"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.077408 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-27w4f" event={"ID":"03fe0d37-55e7-485b-9ac2-b0289b860a8a","Type":"ContainerStarted","Data":"6a7bf3d5aefc2b26ba0c0c4e78cd8a82c0525d89a708144d36f9b44967023cf5"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.077417 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-27w4f" event={"ID":"03fe0d37-55e7-485b-9ac2-b0289b860a8a","Type":"ContainerStarted","Data":"20f6ef084a228ed6eac69d1a6e1842d56b70205f06d1945b87d589408f463712"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.087692 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" event={"ID":"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6","Type":"ContainerStarted","Data":"c271a3a1fffbde3714589a265212b4f6c1da7ca6728d2666120cb76a177eb803"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.089029 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.143233 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm2j2" event={"ID":"a4a5327f-8708-421b-a361-cb948df8f801","Type":"ContainerStarted","Data":"bd786b9b06161ca60f10779bb35de27de33956060a49f8968cd3e7a9ffcf2630"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.146127 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jcngv" event={"ID":"34a2348f-c1db-4d94-9943-d616526bd03b","Type":"ContainerStarted","Data":"62ccc40c5ef466609edfe89c9cb4d07099f12bb832acca8a4e67341c74de1ba9"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.149365 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dq767" event={"ID":"9b594229-eae0-42db-b340-0f23b9158cda","Type":"ContainerStarted","Data":"0eee3b1168f1650f44312a91a08707d19310d25f1a6a286a8e627cebf49bf15b"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.149422 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dq767" event={"ID":"9b594229-eae0-42db-b340-0f23b9158cda","Type":"ContainerStarted","Data":"429d0e69d96dace358fb08463999df7c6ab5a964dc57153aba4542cd3bf96c19"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.156644 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xtrx\" (UniqueName: \"kubernetes.io/projected/5fe98aac-9aed-4963-a4a9-eeaa65a11720-kube-api-access-7xtrx\") pod \"console-operator-58897d9998-bs2kd\" (UID: \"5fe98aac-9aed-4963-a4a9-eeaa65a11720\") " pod="openshift-console-operator/console-operator-58897d9998-bs2kd"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.156738 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7jp4\" (UniqueName: \"kubernetes.io/projected/9e74a6ce-6ddf-437a-8b5b-4587c90df3f5-kube-api-access-q7jp4\") pod \"openshift-config-operator-7777fb866f-ltzt5\" (UID: \"9e74a6ce-6ddf-437a-8b5b-4587c90df3f5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.161026 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" event={"ID":"6876f05d-5d39-418b-8697-4dfbd5600c92","Type":"ContainerStarted","Data":"0809ebf62f59be5a45d7de94197cf5ff1caec00e022a8f85b7131218c6c20bee"}
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.167281 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z79q5\" (UniqueName: \"kubernetes.io/projected/e4ab90eb-251c-4e6b-965f-fbda2779d2ee-kube-api-access-z79q5\") pod \"kube-storage-version-migrator-operator-b67b599dd-j45bm\" (UID: \"e4ab90eb-251c-4e6b-965f-fbda2779d2ee\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j45bm"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.173148 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.187992 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j45bm"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.193037 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26b047a1-61d6-4237-93a2-82047effa98a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vphtb\" (UID: \"26b047a1-61d6-4237-93a2-82047effa98a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vphtb"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.204885 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l4jw\" (UniqueName: \"kubernetes.io/projected/c59f4aff-f25f-4882-92fa-3f033eb9b614-kube-api-access-7l4jw\") pod \"machine-config-controller-84d6567774-rht4g\" (UID: \"c59f4aff-f25f-4882-92fa-3f033eb9b614\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rht4g"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.207275 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rht4g"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.219320 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fssp\" (UniqueName: \"kubernetes.io/projected/6d0c3c9b-2563-4887-a65e-d4777b64ad81-kube-api-access-7fssp\") pod \"multus-admission-controller-857f4d67dd-46gfl\" (UID: \"6d0c3c9b-2563-4887-a65e-d4777b64ad81\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-46gfl"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.226184 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjp"]
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.232984 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkhws\" (UniqueName: \"kubernetes.io/projected/5b43afdc-1af8-457c-9218-d416a0bdadc3-kube-api-access-lkhws\") pod \"machine-config-operator-74547568cd-znncb\" (UID: \"5b43afdc-1af8-457c-9218-d416a0bdadc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-znncb"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.252493 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn7vt\" (UniqueName: \"kubernetes.io/projected/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-kube-api-access-dn7vt\") pod \"oauth-openshift-558db77b4-lt44t\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") " pod="openshift-authentication/oauth-openshift-558db77b4-lt44t"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.272687 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsg26\" (UniqueName: \"kubernetes.io/projected/ffc59d20-7b90-4a2e-bb61-feed9fb458e4-kube-api-access-fsg26\") pod \"catalog-operator-68c6474976-zn85c\" (UID: \"ffc59d20-7b90-4a2e-bb61-feed9fb458e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.292561 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bzc5\" (UniqueName: \"kubernetes.io/projected/59f7de33-73f2-480a-bc50-42be734c1764-kube-api-access-2bzc5\") pod \"openshift-controller-manager-operator-756b6f6bc6-gmzvv\" (UID: \"59f7de33-73f2-480a-bc50-42be734c1764\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmzvv"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.316637 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv6rj\" (UniqueName: \"kubernetes.io/projected/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-kube-api-access-kv6rj\") pod \"console-f9d7485db-72dh6\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " pod="openshift-console/console-f9d7485db-72dh6"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.328154 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2l8c\" (UniqueName: \"kubernetes.io/projected/996b25cf-442f-475a-93aa-3957be55d4f5-kube-api-access-w2l8c\") pod \"router-default-5444994796-72j8q\" (UID: \"996b25cf-442f-475a-93aa-3957be55d4f5\") " pod="openshift-ingress/router-default-5444994796-72j8q"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.351898 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-72j8q"
Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.413369 4754 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vphtb" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.426794 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmzvv" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.436399 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.443482 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-znncb" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.454777 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bs2kd" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455264 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52eb66a8-dd7d-4688-a370-f9ad22a055e4-apiservice-cert\") pod \"packageserver-d55dfcdfc-x8f86\" (UID: \"52eb66a8-dd7d-4688-a370-f9ad22a055e4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455311 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a973e35e-58b0-4402-9dc5-9a30d414cc06-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5sp2v\" (UID: \"a973e35e-58b0-4402-9dc5-9a30d414cc06\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5sp2v" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455383 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d93df2f-dc09-41fb-845b-4d6f73a21c40-bound-sa-token\") pod \"ingress-operator-5b745b69d9-95bqg\" (UID: \"8d93df2f-dc09-41fb-845b-4d6f73a21c40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455469 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a973e35e-58b0-4402-9dc5-9a30d414cc06-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5sp2v\" (UID: \"a973e35e-58b0-4402-9dc5-9a30d414cc06\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5sp2v" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455488 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ac3a0bc-736e-409b-978b-ba6f86b17ff0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zg7hz\" (UID: \"3ac3a0bc-736e-409b-978b-ba6f86b17ff0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zg7hz" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455524 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6jb8\" (UniqueName: \"kubernetes.io/projected/3ac3a0bc-736e-409b-978b-ba6f86b17ff0-kube-api-access-t6jb8\") pod \"package-server-manager-789f6589d5-zg7hz\" (UID: \"3ac3a0bc-736e-409b-978b-ba6f86b17ff0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zg7hz" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455539 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9rxv\" (UniqueName: 
\"kubernetes.io/projected/21556514-9470-431c-b12f-619e2ff69531-kube-api-access-d9rxv\") pod \"ingress-canary-z5tf7\" (UID: \"21556514-9470-431c-b12f-619e2ff69531\") " pod="openshift-ingress-canary/ingress-canary-z5tf7" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455557 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s5sf\" (UniqueName: \"kubernetes.io/projected/c5f2fe27-4245-4b3e-bdf2-e65a5d6e4777-kube-api-access-9s5sf\") pod \"downloads-7954f5f757-nszgz\" (UID: \"c5f2fe27-4245-4b3e-bdf2-e65a5d6e4777\") " pod="openshift-console/downloads-7954f5f757-nszgz" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455605 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/81d8d7f7-21a1-40a4-ba01-d54c91406f08-mountpoint-dir\") pod \"csi-hostpathplugin-6vw4g\" (UID: \"81d8d7f7-21a1-40a4-ba01-d54c91406f08\") " pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455623 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d93df2f-dc09-41fb-845b-4d6f73a21c40-trusted-ca\") pod \"ingress-operator-5b745b69d9-95bqg\" (UID: \"8d93df2f-dc09-41fb-845b-4d6f73a21c40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455638 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f35420d7-13f8-4e0c-890f-fdaf97277ec3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xpsvh\" (UID: \"f35420d7-13f8-4e0c-890f-fdaf97277ec3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455654 
4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/93c4fc30-a89f-4c7e-ac80-10b41321f818-metrics-tls\") pod \"dns-default-2hzgm\" (UID: \"93c4fc30-a89f-4c7e-ac80-10b41321f818\") " pod="openshift-dns/dns-default-2hzgm" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455667 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52eb66a8-dd7d-4688-a370-f9ad22a055e4-webhook-cert\") pod \"packageserver-d55dfcdfc-x8f86\" (UID: \"52eb66a8-dd7d-4688-a370-f9ad22a055e4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455738 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/74edc75d-25dd-4951-828b-9b86187007a5-certs\") pod \"machine-config-server-7n89w\" (UID: \"74edc75d-25dd-4951-828b-9b86187007a5\") " pod="openshift-machine-config-operator/machine-config-server-7n89w" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455754 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5079a199-58fd-45a6-8227-96b4dad59a01-serving-cert\") pod \"service-ca-operator-777779d784-5m74r\" (UID: \"5079a199-58fd-45a6-8227-96b4dad59a01\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5m74r" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455809 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/81d8d7f7-21a1-40a4-ba01-d54c91406f08-csi-data-dir\") pod \"csi-hostpathplugin-6vw4g\" (UID: \"81d8d7f7-21a1-40a4-ba01-d54c91406f08\") " 
pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455835 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a00863d2-1742-42b7-a47e-beef12e21834-registry-certificates\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455889 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/74edc75d-25dd-4951-828b-9b86187007a5-node-bootstrap-token\") pod \"machine-config-server-7n89w\" (UID: \"74edc75d-25dd-4951-828b-9b86187007a5\") " pod="openshift-machine-config-operator/machine-config-server-7n89w" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455922 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9cmx\" (UniqueName: \"kubernetes.io/projected/5079a199-58fd-45a6-8227-96b4dad59a01-kube-api-access-s9cmx\") pod \"service-ca-operator-777779d784-5m74r\" (UID: \"5079a199-58fd-45a6-8227-96b4dad59a01\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5m74r" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455956 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/52eb66a8-dd7d-4688-a370-f9ad22a055e4-tmpfs\") pod \"packageserver-d55dfcdfc-x8f86\" (UID: \"52eb66a8-dd7d-4688-a370-f9ad22a055e4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.455970 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg4kd\" 
(UniqueName: \"kubernetes.io/projected/74edc75d-25dd-4951-828b-9b86187007a5-kube-api-access-sg4kd\") pod \"machine-config-server-7n89w\" (UID: \"74edc75d-25dd-4951-828b-9b86187007a5\") " pod="openshift-machine-config-operator/machine-config-server-7n89w" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456007 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s44sf\" (UniqueName: \"kubernetes.io/projected/8d93df2f-dc09-41fb-845b-4d6f73a21c40-kube-api-access-s44sf\") pod \"ingress-operator-5b745b69d9-95bqg\" (UID: \"8d93df2f-dc09-41fb-845b-4d6f73a21c40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456024 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f35420d7-13f8-4e0c-890f-fdaf97277ec3-srv-cert\") pod \"olm-operator-6b444d44fb-xpsvh\" (UID: \"f35420d7-13f8-4e0c-890f-fdaf97277ec3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456040 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwcvf\" (UniqueName: \"kubernetes.io/projected/9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e-kube-api-access-qwcvf\") pod \"collect-profiles-29524035-8jz7s\" (UID: \"9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456082 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/81d8d7f7-21a1-40a4-ba01-d54c91406f08-registration-dir\") pod \"csi-hostpathplugin-6vw4g\" (UID: \"81d8d7f7-21a1-40a4-ba01-d54c91406f08\") " pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 
18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456164 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a00863d2-1742-42b7-a47e-beef12e21834-registry-tls\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456231 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqc8b\" (UniqueName: \"kubernetes.io/projected/8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae-kube-api-access-tqc8b\") pod \"marketplace-operator-79b997595-wljc4\" (UID: \"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456258 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wljc4\" (UID: \"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456274 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8d93df2f-dc09-41fb-845b-4d6f73a21c40-metrics-tls\") pod \"ingress-operator-5b745b69d9-95bqg\" (UID: \"8d93df2f-dc09-41fb-845b-4d6f73a21c40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456293 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e-secret-volume\") pod \"collect-profiles-29524035-8jz7s\" (UID: \"9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456356 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a00863d2-1742-42b7-a47e-beef12e21834-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456375 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qqvj\" (UniqueName: \"kubernetes.io/projected/bb7de674-4481-4e2c-9cd7-95dd4fe12307-kube-api-access-6qqvj\") pod \"service-ca-9c57cc56f-chrt8\" (UID: \"bb7de674-4481-4e2c-9cd7-95dd4fe12307\") " pod="openshift-service-ca/service-ca-9c57cc56f-chrt8" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456464 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/81d8d7f7-21a1-40a4-ba01-d54c91406f08-socket-dir\") pod \"csi-hostpathplugin-6vw4g\" (UID: \"81d8d7f7-21a1-40a4-ba01-d54c91406f08\") " pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456482 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e-config-volume\") pod \"collect-profiles-29524035-8jz7s\" (UID: \"9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s" Feb 18 19:20:49 crc 
kubenswrapper[4754]: I0218 19:20:49.456507 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a00863d2-1742-42b7-a47e-beef12e21834-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456544 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/81d8d7f7-21a1-40a4-ba01-d54c91406f08-plugins-dir\") pod \"csi-hostpathplugin-6vw4g\" (UID: \"81d8d7f7-21a1-40a4-ba01-d54c91406f08\") " pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456580 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a00863d2-1742-42b7-a47e-beef12e21834-bound-sa-token\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456611 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21556514-9470-431c-b12f-619e2ff69531-cert\") pod \"ingress-canary-z5tf7\" (UID: \"21556514-9470-431c-b12f-619e2ff69531\") " pod="openshift-ingress-canary/ingress-canary-z5tf7" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456627 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5tdp\" (UniqueName: \"kubernetes.io/projected/f35420d7-13f8-4e0c-890f-fdaf97277ec3-kube-api-access-p5tdp\") pod \"olm-operator-6b444d44fb-xpsvh\" (UID: 
\"f35420d7-13f8-4e0c-890f-fdaf97277ec3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456659 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns67r\" (UniqueName: \"kubernetes.io/projected/a00863d2-1742-42b7-a47e-beef12e21834-kube-api-access-ns67r\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456675 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wljc4\" (UID: \"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456707 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456762 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qzld\" (UniqueName: \"kubernetes.io/projected/93c4fc30-a89f-4c7e-ac80-10b41321f818-kube-api-access-5qzld\") pod \"dns-default-2hzgm\" (UID: \"93c4fc30-a89f-4c7e-ac80-10b41321f818\") " pod="openshift-dns/dns-default-2hzgm" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456834 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a00863d2-1742-42b7-a47e-beef12e21834-trusted-ca\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456854 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5079a199-58fd-45a6-8227-96b4dad59a01-config\") pod \"service-ca-operator-777779d784-5m74r\" (UID: \"5079a199-58fd-45a6-8227-96b4dad59a01\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5m74r" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456871 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a973e35e-58b0-4402-9dc5-9a30d414cc06-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5sp2v\" (UID: \"a973e35e-58b0-4402-9dc5-9a30d414cc06\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5sp2v" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456888 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x74ms\" (UniqueName: \"kubernetes.io/projected/52eb66a8-dd7d-4688-a370-f9ad22a055e4-kube-api-access-x74ms\") pod \"packageserver-d55dfcdfc-x8f86\" (UID: \"52eb66a8-dd7d-4688-a370-f9ad22a055e4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456943 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddkmp\" (UniqueName: \"kubernetes.io/projected/81d8d7f7-21a1-40a4-ba01-d54c91406f08-kube-api-access-ddkmp\") pod \"csi-hostpathplugin-6vw4g\" 
(UID: \"81d8d7f7-21a1-40a4-ba01-d54c91406f08\") " pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456960 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bb7de674-4481-4e2c-9cd7-95dd4fe12307-signing-key\") pod \"service-ca-9c57cc56f-chrt8\" (UID: \"bb7de674-4481-4e2c-9cd7-95dd4fe12307\") " pod="openshift-service-ca/service-ca-9c57cc56f-chrt8" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.456984 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93c4fc30-a89f-4c7e-ac80-10b41321f818-config-volume\") pod \"dns-default-2hzgm\" (UID: \"93c4fc30-a89f-4c7e-ac80-10b41321f818\") " pod="openshift-dns/dns-default-2hzgm" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.457008 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bb7de674-4481-4e2c-9cd7-95dd4fe12307-signing-cabundle\") pod \"service-ca-9c57cc56f-chrt8\" (UID: \"bb7de674-4481-4e2c-9cd7-95dd4fe12307\") " pod="openshift-service-ca/service-ca-9c57cc56f-chrt8" Feb 18 19:20:49 crc kubenswrapper[4754]: E0218 19:20:49.469493 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:49.969477638 +0000 UTC m=+152.419890434 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.478845 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.503190 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-46gfl" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.516188 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.544036 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5"] Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.558082 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:49 crc kubenswrapper[4754]: E0218 19:20:49.558385 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:20:50.05835395 +0000 UTC m=+152.508766746 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.558478 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a973e35e-58b0-4402-9dc5-9a30d414cc06-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5sp2v\" (UID: \"a973e35e-58b0-4402-9dc5-9a30d414cc06\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5sp2v" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.558564 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ac3a0bc-736e-409b-978b-ba6f86b17ff0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zg7hz\" (UID: \"3ac3a0bc-736e-409b-978b-ba6f86b17ff0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zg7hz" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.558641 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6jb8\" (UniqueName: \"kubernetes.io/projected/3ac3a0bc-736e-409b-978b-ba6f86b17ff0-kube-api-access-t6jb8\") pod \"package-server-manager-789f6589d5-zg7hz\" (UID: \"3ac3a0bc-736e-409b-978b-ba6f86b17ff0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zg7hz" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.558722 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9rxv\" (UniqueName: \"kubernetes.io/projected/21556514-9470-431c-b12f-619e2ff69531-kube-api-access-d9rxv\") pod \"ingress-canary-z5tf7\" (UID: \"21556514-9470-431c-b12f-619e2ff69531\") " pod="openshift-ingress-canary/ingress-canary-z5tf7" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.558795 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s5sf\" (UniqueName: \"kubernetes.io/projected/c5f2fe27-4245-4b3e-bdf2-e65a5d6e4777-kube-api-access-9s5sf\") pod \"downloads-7954f5f757-nszgz\" (UID: \"c5f2fe27-4245-4b3e-bdf2-e65a5d6e4777\") " pod="openshift-console/downloads-7954f5f757-nszgz" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.558885 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/81d8d7f7-21a1-40a4-ba01-d54c91406f08-mountpoint-dir\") pod \"csi-hostpathplugin-6vw4g\" (UID: \"81d8d7f7-21a1-40a4-ba01-d54c91406f08\") " pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.558959 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d93df2f-dc09-41fb-845b-4d6f73a21c40-trusted-ca\") pod \"ingress-operator-5b745b69d9-95bqg\" (UID: \"8d93df2f-dc09-41fb-845b-4d6f73a21c40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.559027 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f35420d7-13f8-4e0c-890f-fdaf97277ec3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xpsvh\" (UID: \"f35420d7-13f8-4e0c-890f-fdaf97277ec3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh" Feb 18 19:20:49 crc 
kubenswrapper[4754]: I0218 19:20:49.559217 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/93c4fc30-a89f-4c7e-ac80-10b41321f818-metrics-tls\") pod \"dns-default-2hzgm\" (UID: \"93c4fc30-a89f-4c7e-ac80-10b41321f818\") " pod="openshift-dns/dns-default-2hzgm" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.559294 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52eb66a8-dd7d-4688-a370-f9ad22a055e4-webhook-cert\") pod \"packageserver-d55dfcdfc-x8f86\" (UID: \"52eb66a8-dd7d-4688-a370-f9ad22a055e4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.559372 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/74edc75d-25dd-4951-828b-9b86187007a5-certs\") pod \"machine-config-server-7n89w\" (UID: \"74edc75d-25dd-4951-828b-9b86187007a5\") " pod="openshift-machine-config-operator/machine-config-server-7n89w" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.559446 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5079a199-58fd-45a6-8227-96b4dad59a01-serving-cert\") pod \"service-ca-operator-777779d784-5m74r\" (UID: \"5079a199-58fd-45a6-8227-96b4dad59a01\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5m74r" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.559526 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/81d8d7f7-21a1-40a4-ba01-d54c91406f08-csi-data-dir\") pod \"csi-hostpathplugin-6vw4g\" (UID: \"81d8d7f7-21a1-40a4-ba01-d54c91406f08\") " pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 
19:20:49.559604 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a00863d2-1742-42b7-a47e-beef12e21834-registry-certificates\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.559673 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/74edc75d-25dd-4951-828b-9b86187007a5-node-bootstrap-token\") pod \"machine-config-server-7n89w\" (UID: \"74edc75d-25dd-4951-828b-9b86187007a5\") " pod="openshift-machine-config-operator/machine-config-server-7n89w" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.559747 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/52eb66a8-dd7d-4688-a370-f9ad22a055e4-tmpfs\") pod \"packageserver-d55dfcdfc-x8f86\" (UID: \"52eb66a8-dd7d-4688-a370-f9ad22a055e4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.559883 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9cmx\" (UniqueName: \"kubernetes.io/projected/5079a199-58fd-45a6-8227-96b4dad59a01-kube-api-access-s9cmx\") pod \"service-ca-operator-777779d784-5m74r\" (UID: \"5079a199-58fd-45a6-8227-96b4dad59a01\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5m74r" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.559953 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s44sf\" (UniqueName: \"kubernetes.io/projected/8d93df2f-dc09-41fb-845b-4d6f73a21c40-kube-api-access-s44sf\") pod \"ingress-operator-5b745b69d9-95bqg\" (UID: \"8d93df2f-dc09-41fb-845b-4d6f73a21c40\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.560020 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg4kd\" (UniqueName: \"kubernetes.io/projected/74edc75d-25dd-4951-828b-9b86187007a5-kube-api-access-sg4kd\") pod \"machine-config-server-7n89w\" (UID: \"74edc75d-25dd-4951-828b-9b86187007a5\") " pod="openshift-machine-config-operator/machine-config-server-7n89w" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.560085 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f35420d7-13f8-4e0c-890f-fdaf97277ec3-srv-cert\") pod \"olm-operator-6b444d44fb-xpsvh\" (UID: \"f35420d7-13f8-4e0c-890f-fdaf97277ec3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.560180 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwcvf\" (UniqueName: \"kubernetes.io/projected/9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e-kube-api-access-qwcvf\") pod \"collect-profiles-29524035-8jz7s\" (UID: \"9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.560254 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/81d8d7f7-21a1-40a4-ba01-d54c91406f08-registration-dir\") pod \"csi-hostpathplugin-6vw4g\" (UID: \"81d8d7f7-21a1-40a4-ba01-d54c91406f08\") " pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.560335 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a00863d2-1742-42b7-a47e-beef12e21834-registry-tls\") pod 
\"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.560485 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqc8b\" (UniqueName: \"kubernetes.io/projected/8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae-kube-api-access-tqc8b\") pod \"marketplace-operator-79b997595-wljc4\" (UID: \"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.560564 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wljc4\" (UID: \"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.560649 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8d93df2f-dc09-41fb-845b-4d6f73a21c40-metrics-tls\") pod \"ingress-operator-5b745b69d9-95bqg\" (UID: \"8d93df2f-dc09-41fb-845b-4d6f73a21c40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.560722 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e-secret-volume\") pod \"collect-profiles-29524035-8jz7s\" (UID: \"9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.560792 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a00863d2-1742-42b7-a47e-beef12e21834-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.577031 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qqvj\" (UniqueName: \"kubernetes.io/projected/bb7de674-4481-4e2c-9cd7-95dd4fe12307-kube-api-access-6qqvj\") pod \"service-ca-9c57cc56f-chrt8\" (UID: \"bb7de674-4481-4e2c-9cd7-95dd4fe12307\") " pod="openshift-service-ca/service-ca-9c57cc56f-chrt8" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.579848 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/81d8d7f7-21a1-40a4-ba01-d54c91406f08-socket-dir\") pod \"csi-hostpathplugin-6vw4g\" (UID: \"81d8d7f7-21a1-40a4-ba01-d54c91406f08\") " pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.579986 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e-config-volume\") pod \"collect-profiles-29524035-8jz7s\" (UID: \"9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.580064 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a00863d2-1742-42b7-a47e-beef12e21834-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.580091 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/81d8d7f7-21a1-40a4-ba01-d54c91406f08-plugins-dir\") pod \"csi-hostpathplugin-6vw4g\" (UID: \"81d8d7f7-21a1-40a4-ba01-d54c91406f08\") " pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.580135 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a00863d2-1742-42b7-a47e-beef12e21834-bound-sa-token\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.580178 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns67r\" (UniqueName: \"kubernetes.io/projected/a00863d2-1742-42b7-a47e-beef12e21834-kube-api-access-ns67r\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.580205 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wljc4\" (UID: \"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.580232 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21556514-9470-431c-b12f-619e2ff69531-cert\") pod \"ingress-canary-z5tf7\" (UID: \"21556514-9470-431c-b12f-619e2ff69531\") " pod="openshift-ingress-canary/ingress-canary-z5tf7" Feb 18 19:20:49 crc kubenswrapper[4754]: 
I0218 19:20:49.580253 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5tdp\" (UniqueName: \"kubernetes.io/projected/f35420d7-13f8-4e0c-890f-fdaf97277ec3-kube-api-access-p5tdp\") pod \"olm-operator-6b444d44fb-xpsvh\" (UID: \"f35420d7-13f8-4e0c-890f-fdaf97277ec3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.580311 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.580343 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qzld\" (UniqueName: \"kubernetes.io/projected/93c4fc30-a89f-4c7e-ac80-10b41321f818-kube-api-access-5qzld\") pod \"dns-default-2hzgm\" (UID: \"93c4fc30-a89f-4c7e-ac80-10b41321f818\") " pod="openshift-dns/dns-default-2hzgm" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.580374 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a00863d2-1742-42b7-a47e-beef12e21834-trusted-ca\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.580393 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5079a199-58fd-45a6-8227-96b4dad59a01-config\") pod \"service-ca-operator-777779d784-5m74r\" (UID: \"5079a199-58fd-45a6-8227-96b4dad59a01\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-5m74r" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.580414 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a973e35e-58b0-4402-9dc5-9a30d414cc06-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5sp2v\" (UID: \"a973e35e-58b0-4402-9dc5-9a30d414cc06\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5sp2v" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.580450 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x74ms\" (UniqueName: \"kubernetes.io/projected/52eb66a8-dd7d-4688-a370-f9ad22a055e4-kube-api-access-x74ms\") pod \"packageserver-d55dfcdfc-x8f86\" (UID: \"52eb66a8-dd7d-4688-a370-f9ad22a055e4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.580483 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93c4fc30-a89f-4c7e-ac80-10b41321f818-config-volume\") pod \"dns-default-2hzgm\" (UID: \"93c4fc30-a89f-4c7e-ac80-10b41321f818\") " pod="openshift-dns/dns-default-2hzgm" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.580505 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddkmp\" (UniqueName: \"kubernetes.io/projected/81d8d7f7-21a1-40a4-ba01-d54c91406f08-kube-api-access-ddkmp\") pod \"csi-hostpathplugin-6vw4g\" (UID: \"81d8d7f7-21a1-40a4-ba01-d54c91406f08\") " pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.580526 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bb7de674-4481-4e2c-9cd7-95dd4fe12307-signing-key\") pod 
\"service-ca-9c57cc56f-chrt8\" (UID: \"bb7de674-4481-4e2c-9cd7-95dd4fe12307\") " pod="openshift-service-ca/service-ca-9c57cc56f-chrt8" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.580551 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bb7de674-4481-4e2c-9cd7-95dd4fe12307-signing-cabundle\") pod \"service-ca-9c57cc56f-chrt8\" (UID: \"bb7de674-4481-4e2c-9cd7-95dd4fe12307\") " pod="openshift-service-ca/service-ca-9c57cc56f-chrt8" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.580592 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a973e35e-58b0-4402-9dc5-9a30d414cc06-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5sp2v\" (UID: \"a973e35e-58b0-4402-9dc5-9a30d414cc06\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5sp2v" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.580612 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52eb66a8-dd7d-4688-a370-f9ad22a055e4-apiservice-cert\") pod \"packageserver-d55dfcdfc-x8f86\" (UID: \"52eb66a8-dd7d-4688-a370-f9ad22a055e4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.580638 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d93df2f-dc09-41fb-845b-4d6f73a21c40-bound-sa-token\") pod \"ingress-operator-5b745b69d9-95bqg\" (UID: \"8d93df2f-dc09-41fb-845b-4d6f73a21c40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.569261 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8d93df2f-dc09-41fb-845b-4d6f73a21c40-trusted-ca\") pod \"ingress-operator-5b745b69d9-95bqg\" (UID: \"8d93df2f-dc09-41fb-845b-4d6f73a21c40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.572871 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/81d8d7f7-21a1-40a4-ba01-d54c91406f08-registration-dir\") pod \"csi-hostpathplugin-6vw4g\" (UID: \"81d8d7f7-21a1-40a4-ba01-d54c91406f08\") " pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.560573 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/81d8d7f7-21a1-40a4-ba01-d54c91406f08-csi-data-dir\") pod \"csi-hostpathplugin-6vw4g\" (UID: \"81d8d7f7-21a1-40a4-ba01-d54c91406f08\") " pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.576671 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wljc4\" (UID: \"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.577862 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/74edc75d-25dd-4951-828b-9b86187007a5-certs\") pod \"machine-config-server-7n89w\" (UID: \"74edc75d-25dd-4951-828b-9b86187007a5\") " pod="openshift-machine-config-operator/machine-config-server-7n89w" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.579265 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/a00863d2-1742-42b7-a47e-beef12e21834-registry-certificates\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.581007 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/52eb66a8-dd7d-4688-a370-f9ad22a055e4-tmpfs\") pod \"packageserver-d55dfcdfc-x8f86\" (UID: \"52eb66a8-dd7d-4688-a370-f9ad22a055e4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.581667 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/81d8d7f7-21a1-40a4-ba01-d54c91406f08-socket-dir\") pod \"csi-hostpathplugin-6vw4g\" (UID: \"81d8d7f7-21a1-40a4-ba01-d54c91406f08\") " pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.582007 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a973e35e-58b0-4402-9dc5-9a30d414cc06-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5sp2v\" (UID: \"a973e35e-58b0-4402-9dc5-9a30d414cc06\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5sp2v" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.560903 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/81d8d7f7-21a1-40a4-ba01-d54c91406f08-mountpoint-dir\") pod \"csi-hostpathplugin-6vw4g\" (UID: \"81d8d7f7-21a1-40a4-ba01-d54c91406f08\") " pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.588622 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/a00863d2-1742-42b7-a47e-beef12e21834-ca-trust-extracted\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.588704 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/81d8d7f7-21a1-40a4-ba01-d54c91406f08-plugins-dir\") pod \"csi-hostpathplugin-6vw4g\" (UID: \"81d8d7f7-21a1-40a4-ba01-d54c91406f08\") " pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 18 19:20:49 crc kubenswrapper[4754]: E0218 19:20:49.588795 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:50.088775454 +0000 UTC m=+152.539188250 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.589193 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5079a199-58fd-45a6-8227-96b4dad59a01-config\") pod \"service-ca-operator-777779d784-5m74r\" (UID: \"5079a199-58fd-45a6-8227-96b4dad59a01\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5m74r" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.589500 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e-config-volume\") pod \"collect-profiles-29524035-8jz7s\" (UID: \"9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.590730 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bb7de674-4481-4e2c-9cd7-95dd4fe12307-signing-cabundle\") pod \"service-ca-9c57cc56f-chrt8\" (UID: \"bb7de674-4481-4e2c-9cd7-95dd4fe12307\") " pod="openshift-service-ca/service-ca-9c57cc56f-chrt8" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.592360 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a00863d2-1742-42b7-a47e-beef12e21834-trusted-ca\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.593504 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93c4fc30-a89f-4c7e-ac80-10b41321f818-config-volume\") pod \"dns-default-2hzgm\" (UID: \"93c4fc30-a89f-4c7e-ac80-10b41321f818\") " pod="openshift-dns/dns-default-2hzgm" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.615260 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52eb66a8-dd7d-4688-a370-f9ad22a055e4-webhook-cert\") pod \"packageserver-d55dfcdfc-x8f86\" (UID: \"52eb66a8-dd7d-4688-a370-f9ad22a055e4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.629563 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/93c4fc30-a89f-4c7e-ac80-10b41321f818-metrics-tls\") pod \"dns-default-2hzgm\" (UID: \"93c4fc30-a89f-4c7e-ac80-10b41321f818\") " pod="openshift-dns/dns-default-2hzgm" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.630360 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e-secret-volume\") pod \"collect-profiles-29524035-8jz7s\" (UID: \"9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.631049 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f35420d7-13f8-4e0c-890f-fdaf97277ec3-srv-cert\") pod \"olm-operator-6b444d44fb-xpsvh\" (UID: \"f35420d7-13f8-4e0c-890f-fdaf97277ec3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.632561 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a00863d2-1742-42b7-a47e-beef12e21834-registry-tls\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.635156 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s5sf\" (UniqueName: \"kubernetes.io/projected/c5f2fe27-4245-4b3e-bdf2-e65a5d6e4777-kube-api-access-9s5sf\") pod \"downloads-7954f5f757-nszgz\" (UID: \"c5f2fe27-4245-4b3e-bdf2-e65a5d6e4777\") " pod="openshift-console/downloads-7954f5f757-nszgz" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.636340 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ac3a0bc-736e-409b-978b-ba6f86b17ff0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zg7hz\" (UID: \"3ac3a0bc-736e-409b-978b-ba6f86b17ff0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zg7hz" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.643630 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21556514-9470-431c-b12f-619e2ff69531-cert\") pod \"ingress-canary-z5tf7\" (UID: \"21556514-9470-431c-b12f-619e2ff69531\") " pod="openshift-ingress-canary/ingress-canary-z5tf7" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.643654 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a00863d2-1742-42b7-a47e-beef12e21834-installation-pull-secrets\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.643987 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f35420d7-13f8-4e0c-890f-fdaf97277ec3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xpsvh\" (UID: \"f35420d7-13f8-4e0c-890f-fdaf97277ec3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.647211 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8d93df2f-dc09-41fb-845b-4d6f73a21c40-metrics-tls\") pod \"ingress-operator-5b745b69d9-95bqg\" (UID: \"8d93df2f-dc09-41fb-845b-4d6f73a21c40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.657739 4754 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/74edc75d-25dd-4951-828b-9b86187007a5-node-bootstrap-token\") pod \"machine-config-server-7n89w\" (UID: \"74edc75d-25dd-4951-828b-9b86187007a5\") " pod="openshift-machine-config-operator/machine-config-server-7n89w" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.661060 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wljc4\" (UID: \"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.662858 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bb7de674-4481-4e2c-9cd7-95dd4fe12307-signing-key\") pod \"service-ca-9c57cc56f-chrt8\" (UID: \"bb7de674-4481-4e2c-9cd7-95dd4fe12307\") " pod="openshift-service-ca/service-ca-9c57cc56f-chrt8" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.663852 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwcvf\" (UniqueName: \"kubernetes.io/projected/9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e-kube-api-access-qwcvf\") pod \"collect-profiles-29524035-8jz7s\" (UID: \"9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.671824 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.672676 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a973e35e-58b0-4402-9dc5-9a30d414cc06-serving-cert\") 
pod \"openshift-kube-scheduler-operator-5fdd9b5758-5sp2v\" (UID: \"a973e35e-58b0-4402-9dc5-9a30d414cc06\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5sp2v" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.673906 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52eb66a8-dd7d-4688-a370-f9ad22a055e4-apiservice-cert\") pod \"packageserver-d55dfcdfc-x8f86\" (UID: \"52eb66a8-dd7d-4688-a370-f9ad22a055e4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.675007 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5079a199-58fd-45a6-8227-96b4dad59a01-serving-cert\") pod \"service-ca-operator-777779d784-5m74r\" (UID: \"5079a199-58fd-45a6-8227-96b4dad59a01\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5m74r" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.675168 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9rxv\" (UniqueName: \"kubernetes.io/projected/21556514-9470-431c-b12f-619e2ff69531-kube-api-access-d9rxv\") pod \"ingress-canary-z5tf7\" (UID: \"21556514-9470-431c-b12f-619e2ff69531\") " pod="openshift-ingress-canary/ingress-canary-z5tf7" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.678825 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6jb8\" (UniqueName: \"kubernetes.io/projected/3ac3a0bc-736e-409b-978b-ba6f86b17ff0-kube-api-access-t6jb8\") pod \"package-server-manager-789f6589d5-zg7hz\" (UID: \"3ac3a0bc-736e-409b-978b-ba6f86b17ff0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zg7hz" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.679646 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tqc8b\" (UniqueName: \"kubernetes.io/projected/8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae-kube-api-access-tqc8b\") pod \"marketplace-operator-79b997595-wljc4\" (UID: \"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.681740 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.682736 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rht4g"] Feb 18 19:20:49 crc kubenswrapper[4754]: E0218 19:20:49.687237 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:50.187215395 +0000 UTC m=+152.637628191 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.687315 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: E0218 19:20:49.687627 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:50.187619337 +0000 UTC m=+152.638032133 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.719248 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j45bm"] Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.735474 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qqvj\" (UniqueName: \"kubernetes.io/projected/bb7de674-4481-4e2c-9cd7-95dd4fe12307-kube-api-access-6qqvj\") pod \"service-ca-9c57cc56f-chrt8\" (UID: \"bb7de674-4481-4e2c-9cd7-95dd4fe12307\") " pod="openshift-service-ca/service-ca-9c57cc56f-chrt8" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.737662 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9cmx\" (UniqueName: \"kubernetes.io/projected/5079a199-58fd-45a6-8227-96b4dad59a01-kube-api-access-s9cmx\") pod \"service-ca-operator-777779d784-5m74r\" (UID: \"5079a199-58fd-45a6-8227-96b4dad59a01\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5m74r" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.769104 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s44sf\" (UniqueName: \"kubernetes.io/projected/8d93df2f-dc09-41fb-845b-4d6f73a21c40-kube-api-access-s44sf\") pod \"ingress-operator-5b745b69d9-95bqg\" (UID: \"8d93df2f-dc09-41fb-845b-4d6f73a21c40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 
19:20:49.788455 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:49 crc kubenswrapper[4754]: E0218 19:20:49.789677 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:50.289638299 +0000 UTC m=+152.740051095 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.790054 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg4kd\" (UniqueName: \"kubernetes.io/projected/74edc75d-25dd-4951-828b-9b86187007a5-kube-api-access-sg4kd\") pod \"machine-config-server-7n89w\" (UID: \"74edc75d-25dd-4951-828b-9b86187007a5\") " pod="openshift-machine-config-operator/machine-config-server-7n89w" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.793059 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") 
" pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: E0218 19:20:49.793651 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:50.29363604 +0000 UTC m=+152.744048836 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.796615 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d93df2f-dc09-41fb-845b-4d6f73a21c40-bound-sa-token\") pod \"ingress-operator-5b745b69d9-95bqg\" (UID: \"8d93df2f-dc09-41fb-845b-4d6f73a21c40\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.821969 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5tdp\" (UniqueName: \"kubernetes.io/projected/f35420d7-13f8-4e0c-890f-fdaf97277ec3-kube-api-access-p5tdp\") pod \"olm-operator-6b444d44fb-xpsvh\" (UID: \"f35420d7-13f8-4e0c-890f-fdaf97277ec3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.827716 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.845080 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.854220 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qzld\" (UniqueName: \"kubernetes.io/projected/93c4fc30-a89f-4c7e-ac80-10b41321f818-kube-api-access-5qzld\") pod \"dns-default-2hzgm\" (UID: \"93c4fc30-a89f-4c7e-ac80-10b41321f818\") " pod="openshift-dns/dns-default-2hzgm" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.857892 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-nszgz" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.867913 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.893482 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vphtb"] Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.894092 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:49 crc kubenswrapper[4754]: E0218 19:20:49.894817 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:20:50.394757557 +0000 UTC m=+152.845170353 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.895437 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a00863d2-1742-42b7-a47e-beef12e21834-bound-sa-token\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.899906 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns67r\" (UniqueName: \"kubernetes.io/projected/a00863d2-1742-42b7-a47e-beef12e21834-kube-api-access-ns67r\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.922060 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddkmp\" (UniqueName: \"kubernetes.io/projected/81d8d7f7-21a1-40a4-ba01-d54c91406f08-kube-api-access-ddkmp\") pod \"csi-hostpathplugin-6vw4g\" (UID: \"81d8d7f7-21a1-40a4-ba01-d54c91406f08\") " pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.958043 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x74ms\" (UniqueName: 
\"kubernetes.io/projected/52eb66a8-dd7d-4688-a370-f9ad22a055e4-kube-api-access-x74ms\") pod \"packageserver-d55dfcdfc-x8f86\" (UID: \"52eb66a8-dd7d-4688-a370-f9ad22a055e4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.960461 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.960917 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zg7hz" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.961486 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-chrt8" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.961566 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5m74r" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.971240 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z5tf7" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.985729 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a973e35e-58b0-4402-9dc5-9a30d414cc06-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5sp2v\" (UID: \"a973e35e-58b0-4402-9dc5-9a30d414cc06\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5sp2v" Feb 18 19:20:49 crc kubenswrapper[4754]: I0218 19:20:49.988525 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.004647 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-2hzgm" Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.005029 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:50 crc kubenswrapper[4754]: E0218 19:20:50.005428 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:50.50541419 +0000 UTC m=+152.955826986 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.066462 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-7n89w" Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.107695 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:50 crc kubenswrapper[4754]: E0218 19:20:50.107958 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:50.607941617 +0000 UTC m=+153.058354413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.151898 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5sp2v" Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.208980 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:50 crc kubenswrapper[4754]: E0218 19:20:50.209700 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:50.70968916 +0000 UTC m=+153.160101956 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.214511 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.306630 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rht4g" event={"ID":"c59f4aff-f25f-4882-92fa-3f033eb9b614","Type":"ContainerStarted","Data":"19c8d0fe066b9fcabf5829a355e41c703b03f81c75fa4a6f1e7edbed548da750"} Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.310280 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:50 crc kubenswrapper[4754]: E0218 19:20:50.310469 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:50.810442597 +0000 UTC m=+153.260855393 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.310642 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:50 crc kubenswrapper[4754]: E0218 19:20:50.311007 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:50.811000102 +0000 UTC m=+153.261412898 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.323002 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm2j2" event={"ID":"a4a5327f-8708-421b-a361-cb948df8f801","Type":"ContainerStarted","Data":"ffc9f9d8e96c9ad4fb72fb5e54d7e92ccf0eaedaa07e59e5c28c4918bb1d7ca1"} Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.340759 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-72j8q" event={"ID":"996b25cf-442f-475a-93aa-3957be55d4f5","Type":"ContainerStarted","Data":"a8dda23452ae2f36929fe227109e5aca061bed318c8c64fcff9b3a611fbdbed9"} Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.340813 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-72j8q" event={"ID":"996b25cf-442f-475a-93aa-3957be55d4f5","Type":"ContainerStarted","Data":"85ba5ecb336e2a036d886333bd6788b6b41b3946b8d4346c4f0ef756f7aea5ee"} Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.348005 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" podStartSLOduration=128.3479917 podStartE2EDuration="2m8.3479917s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:50.346688894 +0000 UTC m=+152.797101690" 
watchObservedRunningTime="2026-02-18 19:20:50.3479917 +0000 UTC m=+152.798404496" Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.353638 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-72j8q" Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.361439 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j45bm" event={"ID":"e4ab90eb-251c-4e6b-965f-fbda2779d2ee","Type":"ContainerStarted","Data":"96b48fb51ed449f95fa22ff9602279c30334d7c51c2e4a44ef096eea8d6a5348"} Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.365052 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" event={"ID":"9afb1c84-e82c-43ec-9104-03fba7d404ef","Type":"ContainerStarted","Data":"d39b1285e407f4fb87447dfcb66d553751643aef98e18e8a17269d76d9fac5c1"} Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.366448 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" event={"ID":"6876f05d-5d39-418b-8697-4dfbd5600c92","Type":"ContainerStarted","Data":"c75a7911ac85f543f179cd2b71de422d83a5da6c5ee296d9365aa7aabbe12e30"} Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.369606 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smgx9" event={"ID":"a61192e1-b3ed-4fc0-80fc-499fe120edb4","Type":"ContainerStarted","Data":"6e3dc83d7d94be4a38f1429b1ec4ec52c3fea6db7a89b38e5855a76301793047"} Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.369633 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smgx9" 
event={"ID":"a61192e1-b3ed-4fc0-80fc-499fe120edb4","Type":"ContainerStarted","Data":"fdeef1bce4f02c4142abc30210f3da0ad467de13130178f4adf9e7306cf7edba"} Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.371031 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-srtrk" event={"ID":"bf0d5d00-668b-495d-8433-03a0b9853804","Type":"ContainerStarted","Data":"4a7990d0159095897ec938c95190f4ee564674180e73fc51a5b10462281116d1"} Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.391496 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" event={"ID":"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1","Type":"ContainerStarted","Data":"a1fd15263a2b319bfb68a5a0d048d825e67263b4943f65e0d6415a5fc254e4f5"} Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.413120 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jcngv" event={"ID":"34a2348f-c1db-4d94-9943-d616526bd03b","Type":"ContainerStarted","Data":"4c3553c7e8cf45124cf1a40372b5da9ca2a341b6e3c44e066335782936b73aa3"} Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.416518 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:50 crc kubenswrapper[4754]: E0218 19:20:50.418293 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:50.918277322 +0000 UTC m=+153.368690118 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.427609 4754 patch_prober.go:28] interesting pod/router-default-5444994796-72j8q container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.427689 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-72j8q" podUID="996b25cf-442f-475a-93aa-3957be55d4f5" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.431784 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr" event={"ID":"a3dad887-5ca2-42ea-8e86-357ee76b0b51","Type":"ContainerStarted","Data":"af14821909336384594c63da75db612d306765a75979967804e622ad073e52be"} Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.431827 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr" event={"ID":"a3dad887-5ca2-42ea-8e86-357ee76b0b51","Type":"ContainerStarted","Data":"670a7fb3a4fe510ba498201852f2485c46d61f88057f49f95da2b7fbcc705e6f"} Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.457404 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" podStartSLOduration=128.457388039 podStartE2EDuration="2m8.457388039s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:50.455239649 +0000 UTC m=+152.905652465" watchObservedRunningTime="2026-02-18 19:20:50.457388039 +0000 UTC m=+152.907800835" Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.500054 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmzvv"] Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.512443 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjp" event={"ID":"19ba4f7e-bcca-4d8a-99f2-77e00a2eb255","Type":"ContainerStarted","Data":"b6d851bce5a5f3db12dfb02cf8f0eacd191bec6d6cc127eaab94327b93c0cf6f"} Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.512474 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjp" event={"ID":"19ba4f7e-bcca-4d8a-99f2-77e00a2eb255","Type":"ContainerStarted","Data":"4fbb1ad634c8d68aed147e0c01ee1d33c39aa7b1058258076c46a0ce3f94d0fb"} Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.512485 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5" event={"ID":"9e74a6ce-6ddf-437a-8b5b-4587c90df3f5","Type":"ContainerStarted","Data":"a535dce22007aadcbf39f570c8e7f05c193fb96fae59eed059dfe4e9544dabff"} Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.517880 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:50 crc kubenswrapper[4754]: E0218 19:20:50.518281 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:51.018268346 +0000 UTC m=+153.468681142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.578364 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vphtb" event={"ID":"26b047a1-61d6-4237-93a2-82047effa98a","Type":"ContainerStarted","Data":"d706c042531a6150a3fee3e3d9e3771afadd0cf2e1205645f1a44840ff3dece0"} Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.597697 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.614649 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-znncb"] Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.618797 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:50 crc kubenswrapper[4754]: E0218 19:20:50.621045 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:51.12102674 +0000 UTC m=+153.571439536 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.700219 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-72dh6"] Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.716462 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bs2kd"] Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.721239 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:50 crc kubenswrapper[4754]: E0218 19:20:50.721689 4754 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:51.221671003 +0000 UTC m=+153.672083799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.828635 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:50 crc kubenswrapper[4754]: E0218 19:20:50.829074 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:51.329058224 +0000 UTC m=+153.779471020 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:50 crc kubenswrapper[4754]: I0218 19:20:50.931919 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:50 crc kubenswrapper[4754]: E0218 19:20:50.932852 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:51.432828715 +0000 UTC m=+153.883241501 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.035178 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:51 crc kubenswrapper[4754]: E0218 19:20:51.035822 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:51.535801223 +0000 UTC m=+153.986214019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.138882 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:51 crc kubenswrapper[4754]: E0218 19:20:51.139489 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:51.639475211 +0000 UTC m=+154.089888007 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.144520 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dq767" podStartSLOduration=129.144503713 podStartE2EDuration="2m9.144503713s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:51.143892936 +0000 UTC m=+153.594305732" watchObservedRunningTime="2026-02-18 19:20:51.144503713 +0000 UTC m=+153.594916509" Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.241269 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:51 crc kubenswrapper[4754]: E0218 19:20:51.241529 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:51.741515644 +0000 UTC m=+154.191928440 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.320035 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-46gfl"] Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.348735 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:51 crc kubenswrapper[4754]: E0218 19:20:51.349052 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:51.84903941 +0000 UTC m=+154.299452206 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.349465 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-lbzqr" podStartSLOduration=129.349436611 podStartE2EDuration="2m9.349436611s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:51.348680921 +0000 UTC m=+153.799093717" watchObservedRunningTime="2026-02-18 19:20:51.349436611 +0000 UTC m=+153.799849407" Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.435470 4754 patch_prober.go:28] interesting pod/router-default-5444994796-72j8q container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.435900 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-72j8q" podUID="996b25cf-442f-475a-93aa-3957be55d4f5" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.451322 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:51 crc kubenswrapper[4754]: E0218 19:20:51.451674 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:51.951654309 +0000 UTC m=+154.402067105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.491081 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-xchrj" podStartSLOduration=129.491054744 podStartE2EDuration="2m9.491054744s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:51.489711286 +0000 UTC m=+153.940124072" watchObservedRunningTime="2026-02-18 19:20:51.491054744 +0000 UTC m=+153.941467540" Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.541434 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-27w4f" podStartSLOduration=129.541404756 podStartE2EDuration="2m9.541404756s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:51.537874117 +0000 UTC m=+153.988286913" watchObservedRunningTime="2026-02-18 19:20:51.541404756 +0000 UTC m=+153.991817552" Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.557094 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:51 crc kubenswrapper[4754]: E0218 19:20:51.557643 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:52.057629741 +0000 UTC m=+154.508042537 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.624013 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bs2kd" event={"ID":"5fe98aac-9aed-4963-a4a9-eeaa65a11720","Type":"ContainerStarted","Data":"f6920ba11153415079a432135bebb120315937500ad9a14a8332556c7e080ec3"} Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.627044 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rht4g" event={"ID":"c59f4aff-f25f-4882-92fa-3f033eb9b614","Type":"ContainerStarted","Data":"b79fcdf9b93349b40544172785809ce3512d6d64b2253d8773461ba4acb44a29"} Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.646442 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-7n89w" event={"ID":"74edc75d-25dd-4951-828b-9b86187007a5","Type":"ContainerStarted","Data":"dffa26891e1bbf13fab2951d4de0d4f91c0d6522b19bd3538bb186ebdad087e2"} Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.649524 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-46gfl" event={"ID":"6d0c3c9b-2563-4887-a65e-d4777b64ad81","Type":"ContainerStarted","Data":"d7c65074d4ecbcae93db92040895d37c6dce534cdfd39caa450c17899a11d3d9"} Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.660480 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:51 crc kubenswrapper[4754]: E0218 19:20:51.660998 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:52.16095929 +0000 UTC m=+154.611372086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.662473 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-znncb" event={"ID":"5b43afdc-1af8-457c-9218-d416a0bdadc3","Type":"ContainerStarted","Data":"303e082f0791be2e5a3cb8cf608d48967ba6eec6e76e55041128eec9d1f4efeb"} Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.674988 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-72dh6" event={"ID":"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc","Type":"ContainerStarted","Data":"accd064468f56473741ca76a84041b3bee5a7ddc1cd6578da528a9a3430065e8"} Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.695413 4754 generic.go:334] "Generic (PLEG): container finished" podID="9e74a6ce-6ddf-437a-8b5b-4587c90df3f5" 
containerID="45b2545d32754aeca941700dbd8007c7e651ddaa3a0fd946d42b1067be1a2bce" exitCode=0 Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.695552 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5" event={"ID":"9e74a6ce-6ddf-437a-8b5b-4587c90df3f5","Type":"ContainerDied","Data":"45b2545d32754aeca941700dbd8007c7e651ddaa3a0fd946d42b1067be1a2bce"} Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.700816 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmzvv" event={"ID":"59f7de33-73f2-480a-bc50-42be734c1764","Type":"ContainerStarted","Data":"bf85b72fe1efcc3e5d9a1585703e0897668846bd674ca554102f1e3ed46697f9"} Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.718739 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-srtrk" podStartSLOduration=129.718715789 podStartE2EDuration="2m9.718715789s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:51.692131194 +0000 UTC m=+154.142543990" watchObservedRunningTime="2026-02-18 19:20:51.718715789 +0000 UTC m=+154.169128585" Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.718779 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zg7hz"] Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.757190 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" podStartSLOduration=129.757128817 podStartE2EDuration="2m9.757128817s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:51.74901806 +0000 UTC m=+154.199430856" watchObservedRunningTime="2026-02-18 19:20:51.757128817 +0000 UTC m=+154.207541613" Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.764360 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:51 crc kubenswrapper[4754]: E0218 19:20:51.764742 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:52.264728 +0000 UTC m=+154.715140796 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.768034 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lt44t"] Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.806850 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c"] Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.811304 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-jk4zv" podStartSLOduration=129.811271606 podStartE2EDuration="2m9.811271606s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:51.785993246 +0000 UTC m=+154.236406042" watchObservedRunningTime="2026-02-18 19:20:51.811271606 +0000 UTC m=+154.261684402" Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.824858 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wljc4"] Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.827110 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6vw4g"] Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.828205 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh"] Feb 18 19:20:51 crc 
kubenswrapper[4754]: I0218 19:20:51.855241 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nqhdr" podStartSLOduration=129.855216439 podStartE2EDuration="2m9.855216439s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:51.844836297 +0000 UTC m=+154.295249093" watchObservedRunningTime="2026-02-18 19:20:51.855216439 +0000 UTC m=+154.305629235" Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.867853 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:51 crc kubenswrapper[4754]: E0218 19:20:51.868949 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:52.368921033 +0000 UTC m=+154.819333829 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.869602 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:51 crc kubenswrapper[4754]: E0218 19:20:51.871074 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:52.371058613 +0000 UTC m=+154.821471419 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.895880 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jcngv" podStartSLOduration=129.895864149 podStartE2EDuration="2m9.895864149s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:51.893510583 +0000 UTC m=+154.343923379" watchObservedRunningTime="2026-02-18 19:20:51.895864149 +0000 UTC m=+154.346276945" Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.929677 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wm2j2" podStartSLOduration=129.929650316 podStartE2EDuration="2m9.929650316s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:51.927678571 +0000 UTC m=+154.378091367" watchObservedRunningTime="2026-02-18 19:20:51.929650316 +0000 UTC m=+154.380063102" Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.968623 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-72j8q" podStartSLOduration=129.968587649 podStartE2EDuration="2m9.968587649s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:51.967373465 +0000 UTC m=+154.417786271" watchObservedRunningTime="2026-02-18 19:20:51.968587649 +0000 UTC m=+154.419000445" Feb 18 19:20:51 crc kubenswrapper[4754]: I0218 19:20:51.972895 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:51 crc kubenswrapper[4754]: E0218 19:20:51.973227 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:52.473209369 +0000 UTC m=+154.923622165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:52 crc kubenswrapper[4754]: W0218 19:20:52.060325 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ebe1fc4_b055_4fe3_b40e_d7286a80a4ae.slice/crio-3f8938b84e36163ce988f888822b6b4143cec45057a734bdca4e36742ebcc8ec WatchSource:0}: Error finding container 3f8938b84e36163ce988f888822b6b4143cec45057a734bdca4e36742ebcc8ec: Status 404 returned error can't find the container with id 3f8938b84e36163ce988f888822b6b4143cec45057a734bdca4e36742ebcc8ec Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.076523 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:52 crc kubenswrapper[4754]: E0218 19:20:52.076815 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:52.576803395 +0000 UTC m=+155.027216191 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.163899 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-nszgz"] Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.177261 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:52 crc kubenswrapper[4754]: E0218 19:20:52.177648 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:52.677633163 +0000 UTC m=+155.128045959 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.288590 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:52 crc kubenswrapper[4754]: E0218 19:20:52.289081 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:52.789070689 +0000 UTC m=+155.239483485 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.299556 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5sp2v"] Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.369466 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg"] Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.373391 4754 patch_prober.go:28] interesting pod/router-default-5444994796-72j8q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:20:52 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Feb 18 19:20:52 crc kubenswrapper[4754]: [+]process-running ok Feb 18 19:20:52 crc kubenswrapper[4754]: healthz check failed Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.373453 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-72j8q" podUID="996b25cf-442f-475a-93aa-3957be55d4f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.376423 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z5tf7"] Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.389928 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:52 crc kubenswrapper[4754]: E0218 19:20:52.390980 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:52.890944526 +0000 UTC m=+155.341357352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.391850 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2hzgm"] Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.431106 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s"] Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.448206 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-chrt8"] Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.490207 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86"] Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.491110 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:52 crc kubenswrapper[4754]: E0218 19:20:52.491529 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:52.991493926 +0000 UTC m=+155.441906722 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.503715 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5m74r"] Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.592692 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:52 crc kubenswrapper[4754]: E0218 19:20:52.593110 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:20:53.093081696 +0000 UTC m=+155.543494492 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.697855 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:52 crc kubenswrapper[4754]: E0218 19:20:52.698460 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:53.198444761 +0000 UTC m=+155.648857557 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:52 crc kubenswrapper[4754]: W0218 19:20:52.725775 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52eb66a8_dd7d_4688_a370_f9ad22a055e4.slice/crio-af112a19b7d6fd95bfa8fbff5572672cac7596057716e53d83ddee06e9a317bf WatchSource:0}: Error finding container af112a19b7d6fd95bfa8fbff5572672cac7596057716e53d83ddee06e9a317bf: Status 404 returned error can't find the container with id af112a19b7d6fd95bfa8fbff5572672cac7596057716e53d83ddee06e9a317bf Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.726100 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" event={"ID":"81d8d7f7-21a1-40a4-ba01-d54c91406f08","Type":"ContainerStarted","Data":"a6e37e876504c4359a718aa35f8ed6d5881015744d69f9b032d6a79677d7cf9a"} Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.731479 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smgx9" event={"ID":"a61192e1-b3ed-4fc0-80fc-499fe120edb4","Type":"ContainerStarted","Data":"c7134675ba8ee0de8d7ad3ee8302625c0031cbe29db723dea8c83b96162d9c95"} Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.784412 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nszgz" event={"ID":"c5f2fe27-4245-4b3e-bdf2-e65a5d6e4777","Type":"ContainerStarted","Data":"e1c0c507b9caad7ed0c20830a7641657159277f5cc51700c8fdf9a774fcfda9e"} Feb 18 19:20:52 
crc kubenswrapper[4754]: I0218 19:20:52.787000 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" event={"ID":"4c8554fb-ba0f-48ac-900b-01d5a0c007ab","Type":"ContainerStarted","Data":"90f0006104261583b0db81b8699f24e8c5d2680332c6d23289a8e95aba566ee4"} Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.790406 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5m74r" event={"ID":"5079a199-58fd-45a6-8227-96b4dad59a01","Type":"ContainerStarted","Data":"4adc6a4a9011c5e619a4434e2492e9eb8838c8c99afa50b2f79f0a8e8511e622"} Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.792811 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z5tf7" event={"ID":"21556514-9470-431c-b12f-619e2ff69531","Type":"ContainerStarted","Data":"fbaecfdb017bea8df0a7f8e2d4b2bad9d85dbb9a380d9f1bca280aea9e7b8d15"} Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.794296 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh" event={"ID":"f35420d7-13f8-4e0c-890f-fdaf97277ec3","Type":"ContainerStarted","Data":"df4cb868fca5e0043f41f417e532ffcd7a9521f12b3d1ba38771528ff0b7dd1a"} Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.796803 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c" event={"ID":"ffc59d20-7b90-4a2e-bb61-feed9fb458e4","Type":"ContainerStarted","Data":"ab3dc6a7b341d688135679f9f8b04de289bd97e9d85692cfd665aef75912e4a6"} Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.798269 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:52 crc kubenswrapper[4754]: E0218 19:20:52.798470 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:53.298428706 +0000 UTC m=+155.748841502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.798544 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:52 crc kubenswrapper[4754]: E0218 19:20:52.798917 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:53.2989021 +0000 UTC m=+155.749314886 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.800125 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjp" event={"ID":"19ba4f7e-bcca-4d8a-99f2-77e00a2eb255","Type":"ContainerStarted","Data":"135695f3e7b4aefa34d21e5ad8f018a49aec0981a8d7520a46f5d6262aad1c4f"} Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.817860 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j45bm" event={"ID":"e4ab90eb-251c-4e6b-965f-fbda2779d2ee","Type":"ContainerStarted","Data":"19fc250da191be2193bee32ddfc4e5ca3b5e4563f8f0dc2d3de1c8eb435e6894"} Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.840782 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-j45bm" podStartSLOduration=130.840758864 podStartE2EDuration="2m10.840758864s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:52.840691792 +0000 UTC m=+155.291104578" watchObservedRunningTime="2026-02-18 19:20:52.840758864 +0000 UTC m=+155.291171660" Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.843008 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-72sjp" podStartSLOduration=130.843001697 podStartE2EDuration="2m10.843001697s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:52.823599512 +0000 UTC m=+155.274012308" watchObservedRunningTime="2026-02-18 19:20:52.843001697 +0000 UTC m=+155.293414493" Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.854736 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5sp2v" event={"ID":"a973e35e-58b0-4402-9dc5-9a30d414cc06","Type":"ContainerStarted","Data":"18c3149a67ec39e5bf99e36198fee8b6c26f047d7f80633781281e4e104e3bec"} Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.856214 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-chrt8" event={"ID":"bb7de674-4481-4e2c-9cd7-95dd4fe12307","Type":"ContainerStarted","Data":"c99cc94ce26c911f16151d657f666e42a9fba15b2a630367e7f37d069db3d17d"} Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.884456 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s" event={"ID":"9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e","Type":"ContainerStarted","Data":"e8841be7b59ddf9e8fe30b2471a78a443adc91ce48e3f7604508946fe676b5c8"} Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.886003 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg" event={"ID":"8d93df2f-dc09-41fb-845b-4d6f73a21c40","Type":"ContainerStarted","Data":"fc2d6ceea4ab9b684e4b3d2bf1e1d535fdf64bd34f34acdd1e514e5749602b99"} Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.886609 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2hzgm" 
event={"ID":"93c4fc30-a89f-4c7e-ac80-10b41321f818","Type":"ContainerStarted","Data":"f7f6ecd8d992a4315b1ebb0d5659d10b55519b5edb7c7bc5c3c38f74b7892f6a"} Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.896166 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" event={"ID":"f3e6c2ae-d03b-420b-9272-cfbdc82a78e1","Type":"ContainerStarted","Data":"7265744d552d2bddbe54ae063790c83760bf771133ea8598998ef1de56b2923a"} Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.898986 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-7n89w" event={"ID":"74edc75d-25dd-4951-828b-9b86187007a5","Type":"ContainerStarted","Data":"02779d5a7f20c8c657aca16e329959bd88b3e618576cb4f57e7021319173583e"} Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.899367 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:52 crc kubenswrapper[4754]: E0218 19:20:52.900021 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:53.399998035 +0000 UTC m=+155.850410831 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.900175 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:52 crc kubenswrapper[4754]: E0218 19:20:52.903165 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:53.403135603 +0000 UTC m=+155.853548399 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.908197 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zg7hz" event={"ID":"3ac3a0bc-736e-409b-978b-ba6f86b17ff0","Type":"ContainerStarted","Data":"dcc5937a48131d761fed791603e8733f6f3628f6c5849f699f02fbbfa6567f67"} Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.910832 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" event={"ID":"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae","Type":"ContainerStarted","Data":"3f8938b84e36163ce988f888822b6b4143cec45057a734bdca4e36742ebcc8ec"} Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.962385 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:52 crc kubenswrapper[4754]: I0218 19:20:52.964465 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.001227 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:53 crc kubenswrapper[4754]: E0218 19:20:53.001361 4754 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:53.501337537 +0000 UTC m=+155.951750333 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.001915 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:53 crc kubenswrapper[4754]: E0218 19:20:53.002428 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:53.502420738 +0000 UTC m=+155.952833534 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.104781 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:53 crc kubenswrapper[4754]: E0218 19:20:53.105157 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:53.605112528 +0000 UTC m=+156.055525324 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.105332 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:53 crc kubenswrapper[4754]: E0218 19:20:53.105650 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:53.605639083 +0000 UTC m=+156.056051879 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.208755 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:53 crc kubenswrapper[4754]: E0218 19:20:53.209180 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:53.709159847 +0000 UTC m=+156.159572643 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.317131 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:53 crc kubenswrapper[4754]: E0218 19:20:53.317880 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:53.817865776 +0000 UTC m=+156.268278572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.364582 4754 patch_prober.go:28] interesting pod/router-default-5444994796-72j8q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:20:53 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Feb 18 19:20:53 crc kubenswrapper[4754]: [+]process-running ok Feb 18 19:20:53 crc kubenswrapper[4754]: healthz check failed Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.364662 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-72j8q" podUID="996b25cf-442f-475a-93aa-3957be55d4f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.419354 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:53 crc kubenswrapper[4754]: E0218 19:20:53.419893 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:20:53.919874528 +0000 UTC m=+156.370287324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.427470 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.452525 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-7n89w" podStartSLOduration=7.452506903 podStartE2EDuration="7.452506903s" podCreationTimestamp="2026-02-18 19:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:52.919131422 +0000 UTC m=+155.369544218" watchObservedRunningTime="2026-02-18 19:20:53.452506903 +0000 UTC m=+155.902919699" Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.521070 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:53 crc kubenswrapper[4754]: E0218 19:20:53.521506 4754 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:54.021488508 +0000 UTC m=+156.471901304 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.622417 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:53 crc kubenswrapper[4754]: E0218 19:20:53.622823 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:54.122775329 +0000 UTC m=+156.573188125 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.622901 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:53 crc kubenswrapper[4754]: E0218 19:20:53.623372 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:54.123350805 +0000 UTC m=+156.573763601 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.723496 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:53 crc kubenswrapper[4754]: E0218 19:20:53.723691 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:54.223668079 +0000 UTC m=+156.674080875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.724263 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:53 crc kubenswrapper[4754]: E0218 19:20:53.724845 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:54.224816682 +0000 UTC m=+156.675229668 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.826183 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:53 crc kubenswrapper[4754]: E0218 19:20:53.826382 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:54.32634651 +0000 UTC m=+156.776759306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.826514 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:53 crc kubenswrapper[4754]: E0218 19:20:53.826935 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:54.326918055 +0000 UTC m=+156.777330841 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.924260 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zg7hz" event={"ID":"3ac3a0bc-736e-409b-978b-ba6f86b17ff0","Type":"ContainerStarted","Data":"442f74ab6bb0488e9ef5c4f41aa2d16d3c39bc31b183a85aa0354fc21b755b0d"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.924335 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zg7hz" event={"ID":"3ac3a0bc-736e-409b-978b-ba6f86b17ff0","Type":"ContainerStarted","Data":"68ac49157653603eb3b480c5df976be4cf2636eb2fd7e64594f575445432a06b"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.925363 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zg7hz" Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.927102 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:53 crc kubenswrapper[4754]: E0218 19:20:53.927616 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:54.427595369 +0000 UTC m=+156.878008165 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.929301 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" event={"ID":"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae","Type":"ContainerStarted","Data":"c1417b0216f0ce836aed2bce7f71ab5e544ff02db1216c1292c6ac2b201bbede"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.929767 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.930613 4754 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wljc4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.930656 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" podUID="8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.932056 4754 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s" event={"ID":"9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e","Type":"ContainerStarted","Data":"229d35e3f0b5b7fc8f5f79e919807c8ffb8676cd073ac321bef9c8bea8700569"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.935433 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5sp2v" event={"ID":"a973e35e-58b0-4402-9dc5-9a30d414cc06","Type":"ContainerStarted","Data":"1e249754788472bcadf71a0ae54ccdf361b25088595168708627d9d11b5eb4f5"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.937585 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vphtb" event={"ID":"26b047a1-61d6-4237-93a2-82047effa98a","Type":"ContainerStarted","Data":"93cd1e67ca3f2d30187d6f2d0e748c8e64b2bee89c269d5ee77bc9d47a2477ea"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.939785 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2hzgm" event={"ID":"93c4fc30-a89f-4c7e-ac80-10b41321f818","Type":"ContainerStarted","Data":"0ec8f4fba011d7d89ddebecc3bbd68d754c0c71a1ef766b5157ee7be8dec0abb"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.939830 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2hzgm" event={"ID":"93c4fc30-a89f-4c7e-ac80-10b41321f818","Type":"ContainerStarted","Data":"b17fcd88ea5c0bfa4bf8f6e9acff169bd85803b8e7be3cd6e336d6aecb890f52"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.940513 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-2hzgm" Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.949594 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bs2kd" 
event={"ID":"5fe98aac-9aed-4963-a4a9-eeaa65a11720","Type":"ContainerStarted","Data":"08a038581ae2e7c8a07f7c98298eb6a8baa1a8565a02738a768c74c0b8f8c65a"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.950529 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-bs2kd" Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.952543 4754 patch_prober.go:28] interesting pod/console-operator-58897d9998-bs2kd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.952579 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bs2kd" podUID="5fe98aac-9aed-4963-a4a9-eeaa65a11720" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.956838 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh" event={"ID":"f35420d7-13f8-4e0c-890f-fdaf97277ec3","Type":"ContainerStarted","Data":"31680c23927168ac69c16e66626c37f0a11cf2ec581577f4c209a28ecb1b3c5d"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.957303 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh" Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.958343 4754 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-xpsvh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" 
start-of-body= Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.958383 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh" podUID="f35420d7-13f8-4e0c-890f-fdaf97277ec3" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.959877 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rht4g" event={"ID":"c59f4aff-f25f-4882-92fa-3f033eb9b614","Type":"ContainerStarted","Data":"693e1f61b8bc016fbf94d0db37d3b18eb861fc2389970169e3dcd5a73f6a2b29"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.965951 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-znncb" event={"ID":"5b43afdc-1af8-457c-9218-d416a0bdadc3","Type":"ContainerStarted","Data":"210a69815d0161b4ad2ab57ecf49b57a660ce7031b47d9e3e3e2cf6b8eab25f4"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.966001 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-znncb" event={"ID":"5b43afdc-1af8-457c-9218-d416a0bdadc3","Type":"ContainerStarted","Data":"a96f0c00c6ff9dd15f427dc12fb1e663eb3142673e2896e119c80c2811fdf26e"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.969277 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-46gfl" event={"ID":"6d0c3c9b-2563-4887-a65e-d4777b64ad81","Type":"ContainerStarted","Data":"2be4e1cb0f2ee5c1c01990e0b88dbc9850d7e599b5962b72c77297b93018f55c"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.969329 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-46gfl" 
event={"ID":"6d0c3c9b-2563-4887-a65e-d4777b64ad81","Type":"ContainerStarted","Data":"377a3d155f2a5583e45bb4991624293253e6297f85a214b083875a2740f48a3c"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.970782 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z5tf7" event={"ID":"21556514-9470-431c-b12f-619e2ff69531","Type":"ContainerStarted","Data":"380c8cab53f544f6ea4bee790b871a23734a57ac62959b08c4fac62e777ca338"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.973660 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" event={"ID":"4c8554fb-ba0f-48ac-900b-01d5a0c007ab","Type":"ContainerStarted","Data":"d07dee80af5d56e30799bbe5052e789e30c41a22e0f9741972f390fe3d7e407a"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.974695 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.976460 4754 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lt44t container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" start-of-body= Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.976541 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" podUID="4c8554fb-ba0f-48ac-900b-01d5a0c007ab" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.979186 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s" podStartSLOduration=131.979166856 
podStartE2EDuration="2m11.979166856s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:53.977419258 +0000 UTC m=+156.427832054" watchObservedRunningTime="2026-02-18 19:20:53.979166856 +0000 UTC m=+156.429579642" Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.982598 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zg7hz" podStartSLOduration=131.982580402 podStartE2EDuration="2m11.982580402s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:53.949296909 +0000 UTC m=+156.399709715" watchObservedRunningTime="2026-02-18 19:20:53.982580402 +0000 UTC m=+156.432993198" Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.985875 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmzvv" event={"ID":"59f7de33-73f2-480a-bc50-42be734c1764","Type":"ContainerStarted","Data":"6f65f1ee7df87e11359d6c06868016a6e6c7c8fd421b075c56fa4adc02c683a9"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.992457 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg" event={"ID":"8d93df2f-dc09-41fb-845b-4d6f73a21c40","Type":"ContainerStarted","Data":"798eec11156dba7b16883233805516d06f6a4dfd7682ac279586ae5fc7508c0e"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.992513 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg" 
event={"ID":"8d93df2f-dc09-41fb-845b-4d6f73a21c40","Type":"ContainerStarted","Data":"3ff7e150215337f8af48cad959e734fdce79fae7b6b4b7a2fc35e2bf968ea8d7"} Feb 18 19:20:53 crc kubenswrapper[4754]: I0218 19:20:53.997182 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5m74r" event={"ID":"5079a199-58fd-45a6-8227-96b4dad59a01","Type":"ContainerStarted","Data":"89179a189a8a7ef8e76ace01ad1ac6e358b260804bf9614e7a0006328a26da8c"} Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.000960 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-2hzgm" podStartSLOduration=8.000947577 podStartE2EDuration="8.000947577s" podCreationTimestamp="2026-02-18 19:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:53.999430045 +0000 UTC m=+156.449842841" watchObservedRunningTime="2026-02-18 19:20:54.000947577 +0000 UTC m=+156.451360373" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.005589 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-72dh6" event={"ID":"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc","Type":"ContainerStarted","Data":"039469d03cdd94cdab030b21e0e00b26b3eb6f619f496b7a4733674dcfbe031a"} Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.009842 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c" event={"ID":"ffc59d20-7b90-4a2e-bb61-feed9fb458e4","Type":"ContainerStarted","Data":"1219552c2fe169493e8bcbfee3ff8ac7893c39a72cb0bdf53f36949d6865a197"} Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.010170 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.011387 4754 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" event={"ID":"52eb66a8-dd7d-4688-a370-f9ad22a055e4","Type":"ContainerStarted","Data":"de200dff4e4d61ed00c44f42a7d12b5a279fda0f08786079b314ae03d2110f46"} Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.011447 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" event={"ID":"52eb66a8-dd7d-4688-a370-f9ad22a055e4","Type":"ContainerStarted","Data":"af112a19b7d6fd95bfa8fbff5572672cac7596057716e53d83ddee06e9a317bf"} Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.011808 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.011835 4754 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zn85c container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.011891 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c" podUID="ffc59d20-7b90-4a2e-bb61-feed9fb458e4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.014697 4754 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-x8f86 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" start-of-body= Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 
19:20:54.014743 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" podUID="52eb66a8-dd7d-4688-a370-f9ad22a055e4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": dial tcp 10.217.0.36:5443: connect: connection refused" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.014805 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-chrt8" event={"ID":"bb7de674-4481-4e2c-9cd7-95dd4fe12307","Type":"ContainerStarted","Data":"3c83be1ad06cfc3df6d60822181c6474925898f4a10df6d5a4ddf038bf6fba4e"} Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.022600 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nszgz" event={"ID":"c5f2fe27-4245-4b3e-bdf2-e65a5d6e4777","Type":"ContainerStarted","Data":"52a08daeac24d9aa8bfe0bd061bdb3a671fab33ec63ef545cb265f4f38230193"} Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.023069 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-nszgz" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.024891 4754 patch_prober.go:28] interesting pod/downloads-7954f5f757-nszgz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.024939 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nszgz" podUID="c5f2fe27-4245-4b3e-bdf2-e65a5d6e4777" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.40:8080/\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.025896 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5" event={"ID":"9e74a6ce-6ddf-437a-8b5b-4587c90df3f5","Type":"ContainerStarted","Data":"975ac4a845ff21206ad40e76c89cf112c7df5e6ca3dded1f177082c1eeaf5a6a"} Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.028338 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:54 crc kubenswrapper[4754]: E0218 19:20:54.028712 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:54.528699645 +0000 UTC m=+156.979112441 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.029185 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vphtb" podStartSLOduration=132.029173649 podStartE2EDuration="2m12.029173649s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.026496014 +0000 UTC m=+156.476908810" watchObservedRunningTime="2026-02-18 19:20:54.029173649 +0000 UTC m=+156.479586435" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.040469 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jr9lw" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.104524 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5sp2v" podStartSLOduration=132.104503402 podStartE2EDuration="2m12.104503402s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.102775904 +0000 UTC m=+156.553188700" watchObservedRunningTime="2026-02-18 19:20:54.104503402 +0000 UTC m=+156.554916198" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.104625 4754 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" podStartSLOduration=132.104621355 podStartE2EDuration="2m12.104621355s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.067484943 +0000 UTC m=+156.517897739" watchObservedRunningTime="2026-02-18 19:20:54.104621355 +0000 UTC m=+156.555034151" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.129422 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:54 crc kubenswrapper[4754]: E0218 19:20:54.132551 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:54.632522188 +0000 UTC m=+157.082934994 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.166641 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh" podStartSLOduration=132.166619144 podStartE2EDuration="2m12.166619144s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.137833657 +0000 UTC m=+156.588246453" watchObservedRunningTime="2026-02-18 19:20:54.166619144 +0000 UTC m=+156.617031940" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.169876 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" podStartSLOduration=132.169866016 podStartE2EDuration="2m12.169866016s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.16539041 +0000 UTC m=+156.615803206" watchObservedRunningTime="2026-02-18 19:20:54.169866016 +0000 UTC m=+156.620278812" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.239650 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: 
\"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:54 crc kubenswrapper[4754]: E0218 19:20:54.240157 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:54.740119666 +0000 UTC m=+157.190532462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.258744 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gmzvv" podStartSLOduration=132.258715558 podStartE2EDuration="2m12.258715558s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.210511605 +0000 UTC m=+156.660924401" watchObservedRunningTime="2026-02-18 19:20:54.258715558 +0000 UTC m=+156.709128354" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.259037 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" podStartSLOduration=132.259030557 podStartE2EDuration="2m12.259030557s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 19:20:54.255805066 +0000 UTC m=+156.706217862" watchObservedRunningTime="2026-02-18 19:20:54.259030557 +0000 UTC m=+156.709443353" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.278390 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-znncb" podStartSLOduration=132.278371019 podStartE2EDuration="2m12.278371019s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.277101584 +0000 UTC m=+156.727514390" watchObservedRunningTime="2026-02-18 19:20:54.278371019 +0000 UTC m=+156.728783815" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.302214 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-46gfl" podStartSLOduration=132.302190948 podStartE2EDuration="2m12.302190948s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.299265326 +0000 UTC m=+156.749678132" watchObservedRunningTime="2026-02-18 19:20:54.302190948 +0000 UTC m=+156.752603744" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.341611 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:54 crc kubenswrapper[4754]: E0218 19:20:54.342070 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:54.842042005 +0000 UTC m=+157.292454801 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.357469 4754 patch_prober.go:28] interesting pod/router-default-5444994796-72j8q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:20:54 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Feb 18 19:20:54 crc kubenswrapper[4754]: [+]process-running ok Feb 18 19:20:54 crc kubenswrapper[4754]: healthz check failed Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.357532 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-72j8q" podUID="996b25cf-442f-475a-93aa-3957be55d4f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.379248 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" podStartSLOduration=132.379228858 podStartE2EDuration="2m12.379228858s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.33935758 +0000 UTC m=+156.789770376" watchObservedRunningTime="2026-02-18 
19:20:54.379228858 +0000 UTC m=+156.829641654" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.379511 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-nszgz" podStartSLOduration=132.379507836 podStartE2EDuration="2m12.379507836s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.379084034 +0000 UTC m=+156.829496840" watchObservedRunningTime="2026-02-18 19:20:54.379507836 +0000 UTC m=+156.829920632" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.400953 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-smgx9" podStartSLOduration=132.400936617 podStartE2EDuration="2m12.400936617s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.400009122 +0000 UTC m=+156.850421918" watchObservedRunningTime="2026-02-18 19:20:54.400936617 +0000 UTC m=+156.851349413" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.442712 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:54 crc kubenswrapper[4754]: E0218 19:20:54.443044 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 19:20:54.943029898 +0000 UTC m=+157.393442694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.447358 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c" podStartSLOduration=132.447337839 podStartE2EDuration="2m12.447337839s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.439659944 +0000 UTC m=+156.890072740" watchObservedRunningTime="2026-02-18 19:20:54.447337839 +0000 UTC m=+156.897750635" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.457789 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-bs2kd" podStartSLOduration=132.457774702 podStartE2EDuration="2m12.457774702s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.456294931 +0000 UTC m=+156.906707727" watchObservedRunningTime="2026-02-18 19:20:54.457774702 +0000 UTC m=+156.908187488" Feb 18 19:20:54 crc kubenswrapper[4754]: E0218 19:20:54.543975 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:55.043959819 +0000 UTC m=+157.494372615 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.543887 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.544183 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:54 crc kubenswrapper[4754]: E0218 19:20:54.544475 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:55.044467623 +0000 UTC m=+157.494880419 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.551389 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-chrt8" podStartSLOduration=132.551377518 podStartE2EDuration="2m12.551377518s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.48979525 +0000 UTC m=+156.940208046" watchObservedRunningTime="2026-02-18 19:20:54.551377518 +0000 UTC m=+157.001790314" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.574017 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-72dh6" podStartSLOduration=132.573999942 podStartE2EDuration="2m12.573999942s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.573373324 +0000 UTC m=+157.023786120" watchObservedRunningTime="2026-02-18 19:20:54.573999942 +0000 UTC m=+157.024412738" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.576062 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5" podStartSLOduration=132.57605615 podStartE2EDuration="2m12.57605615s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.553828856 +0000 UTC m=+157.004241652" watchObservedRunningTime="2026-02-18 19:20:54.57605615 +0000 UTC m=+157.026468946" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.637291 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-z5tf7" podStartSLOduration=8.637266336 podStartE2EDuration="8.637266336s" podCreationTimestamp="2026-02-18 19:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.618772058 +0000 UTC m=+157.069184854" watchObservedRunningTime="2026-02-18 19:20:54.637266336 +0000 UTC m=+157.087679132" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.645496 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:54 crc kubenswrapper[4754]: E0218 19:20:54.645761 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:55.145744814 +0000 UTC m=+157.596157610 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.698035 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5m74r" podStartSLOduration=132.69802 podStartE2EDuration="2m12.69802s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.69299455 +0000 UTC m=+157.143407346" watchObservedRunningTime="2026-02-18 19:20:54.69802 +0000 UTC m=+157.148432796" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.746869 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:54 crc kubenswrapper[4754]: E0218 19:20:54.747543 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:55.24751989 +0000 UTC m=+157.697932686 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.836771 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rht4g" podStartSLOduration=132.836748862 podStartE2EDuration="2m12.836748862s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.778046236 +0000 UTC m=+157.228459022" watchObservedRunningTime="2026-02-18 19:20:54.836748862 +0000 UTC m=+157.287161668" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.839838 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95bqg" podStartSLOduration=132.839828799 podStartE2EDuration="2m12.839828799s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:54.836261229 +0000 UTC m=+157.286674025" watchObservedRunningTime="2026-02-18 19:20:54.839828799 +0000 UTC m=+157.290241595" Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.848489 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:54 crc kubenswrapper[4754]: E0218 19:20:54.848750 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:55.348731158 +0000 UTC m=+157.799143954 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:54 crc kubenswrapper[4754]: I0218 19:20:54.950214 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:54 crc kubenswrapper[4754]: E0218 19:20:54.950714 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:55.450690658 +0000 UTC m=+157.901103454 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.003305 4754 csr.go:261] certificate signing request csr-skctf is approved, waiting to be issued Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.010971 4754 csr.go:257] certificate signing request csr-skctf is issued Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.032081 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" event={"ID":"81d8d7f7-21a1-40a4-ba01-d54c91406f08","Type":"ContainerStarted","Data":"1b68b80258f8a646899d6136ce43c3ea57079f721e844abf441f3db42b841efb"} Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.033197 4754 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wljc4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.033241 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" podUID="8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.033488 4754 patch_prober.go:28] interesting pod/downloads-7954f5f757-nszgz container/download-server namespace/openshift-console: 
Readiness probe status=failure output="Get \"http://10.217.0.40:8080/\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.033510 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nszgz" podUID="c5f2fe27-4245-4b3e-bdf2-e65a5d6e4777" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.40:8080/\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.036653 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5" Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.039112 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xpsvh" Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.051260 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:55 crc kubenswrapper[4754]: E0218 19:20:55.053292 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:55.553269775 +0000 UTC m=+158.003682581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.159804 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:55 crc kubenswrapper[4754]: E0218 19:20:55.160290 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:55.660272867 +0000 UTC m=+158.110685663 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.181641 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zn85c" Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.261560 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:55 crc kubenswrapper[4754]: E0218 19:20:55.261794 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:55.761763444 +0000 UTC m=+158.212176240 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.261995 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:55 crc kubenswrapper[4754]: E0218 19:20:55.262328 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:55.76231783 +0000 UTC m=+158.212730626 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.356297 4754 patch_prober.go:28] interesting pod/router-default-5444994796-72j8q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:20:55 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Feb 18 19:20:55 crc kubenswrapper[4754]: [+]process-running ok Feb 18 19:20:55 crc kubenswrapper[4754]: healthz check failed Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.356344 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-72j8q" podUID="996b25cf-442f-475a-93aa-3957be55d4f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.362770 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:20:55 crc kubenswrapper[4754]: E0218 19:20:55.362937 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:20:55.862911682 +0000 UTC m=+158.313324478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.363089 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx"
Feb 18 19:20:55 crc kubenswrapper[4754]: E0218 19:20:55.363395 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:55.863387545 +0000 UTC m=+158.313800341 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.463947 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:20:55 crc kubenswrapper[4754]: E0218 19:20:55.464159 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:55.964120001 +0000 UTC m=+158.414532797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.464246 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx"
Feb 18 19:20:55 crc kubenswrapper[4754]: E0218 19:20:55.464547 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:55.964538051 +0000 UTC m=+158.414950847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.555463 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ltzt5"
Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.565303 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:20:55 crc kubenswrapper[4754]: E0218 19:20:55.565698 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:56.065682529 +0000 UTC m=+158.516095325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.666448 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx"
Feb 18 19:20:55 crc kubenswrapper[4754]: E0218 19:20:55.666789 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:56.166775775 +0000 UTC m=+158.617188571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.768184 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:20:55 crc kubenswrapper[4754]: E0218 19:20:55.768405 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:56.268375275 +0000 UTC m=+158.718788071 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.768486 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx"
Feb 18 19:20:55 crc kubenswrapper[4754]: E0218 19:20:55.768842 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:56.268831548 +0000 UTC m=+158.719244344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.870014 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:20:55 crc kubenswrapper[4754]: E0218 19:20:55.870217 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:56.370189401 +0000 UTC m=+158.820602187 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.870302 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx"
Feb 18 19:20:55 crc kubenswrapper[4754]: E0218 19:20:55.870585 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:56.370572101 +0000 UTC m=+158.820984897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.971275 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:20:55 crc kubenswrapper[4754]: E0218 19:20:55.971494 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:56.471461281 +0000 UTC m=+158.921874077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:20:55 crc kubenswrapper[4754]: I0218 19:20:55.971772 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx"
Feb 18 19:20:55 crc kubenswrapper[4754]: E0218 19:20:55.972250 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:56.472231113 +0000 UTC m=+158.922643909 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.012593 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-18 19:15:55 +0000 UTC, rotation deadline is 2026-12-08 10:24:31.84631639 +0000 UTC
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.012638 4754 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7023h3m35.833680423s for next certificate rotation
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.032676 4754 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-x8f86 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.032748 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86" podUID="52eb66a8-dd7d-4688-a370-f9ad22a055e4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.36:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.032921 4754 patch_prober.go:28] interesting pod/console-operator-58897d9998-bs2kd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.032978 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bs2kd" podUID="5fe98aac-9aed-4963-a4a9-eeaa65a11720" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.033479 4754 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lt44t container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.27:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.033569 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" podUID="4c8554fb-ba0f-48ac-900b-01d5a0c007ab" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.27:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.060982 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" event={"ID":"81d8d7f7-21a1-40a4-ba01-d54c91406f08","Type":"ContainerStarted","Data":"cd5d505d459ce205c8996c3a9ad6003f0cdeeb9d8ca356dee8e699f9d982f1e4"}
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.061053 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" event={"ID":"81d8d7f7-21a1-40a4-ba01-d54c91406f08","Type":"ContainerStarted","Data":"93d8b558cc881f9cb0799412528bd2ea3e060c5c6fafa0f9a290f056fd4bbdf1"}
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.062119 4754 patch_prober.go:28] interesting pod/downloads-7954f5f757-nszgz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body=
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.062201 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nszgz" podUID="c5f2fe27-4245-4b3e-bdf2-e65a5d6e4777" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.40:8080/\": dial tcp 10.217.0.40:8080: connect: connection refused"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.064393 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wljc4"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.073244 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:20:56 crc kubenswrapper[4754]: E0218 19:20:56.073455 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:56.573422721 +0000 UTC m=+159.023835517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.074010 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx"
Feb 18 19:20:56 crc kubenswrapper[4754]: E0218 19:20:56.076383 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:56.576362324 +0000 UTC m=+159.026775120 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.099339 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x8f86"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.175260 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:20:56 crc kubenswrapper[4754]: E0218 19:20:56.176138 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:56.676114112 +0000 UTC m=+159.126526908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.276339 4754 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.277239 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx"
Feb 18 19:20:56 crc kubenswrapper[4754]: E0218 19:20:56.277726 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:20:56.777710602 +0000 UTC m=+159.228123398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-qcqwx" (UID: "a00863d2-1742-42b7-a47e-beef12e21834") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.359342 4754 patch_prober.go:28] interesting pod/router-default-5444994796-72j8q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:20:56 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld
Feb 18 19:20:56 crc kubenswrapper[4754]: [+]process-running ok
Feb 18 19:20:56 crc kubenswrapper[4754]: healthz check failed
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.359403 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-72j8q" podUID="996b25cf-442f-475a-93aa-3957be55d4f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.377938 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:20:56 crc kubenswrapper[4754]: E0218 19:20:56.378363 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:20:56.878345985 +0000 UTC m=+159.328758781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.454198 4754 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-18T19:20:56.276369264Z","Handler":null,"Name":""}
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.461622 4754 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.461656 4754 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.471325 4754 patch_prober.go:28] interesting pod/console-operator-58897d9998-bs2kd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 18 19:20:56 crc kubenswrapper[4754]: [+]log ok
Feb 18 19:20:56 crc kubenswrapper[4754]: [-]poststarthook/max-in-flight-filter failed: reason withheld
Feb 18 19:20:56 crc kubenswrapper[4754]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 18 19:20:56 crc kubenswrapper[4754]: [+]shutdown ok
Feb 18 19:20:56 crc kubenswrapper[4754]: readyz check failed
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.471386 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bs2kd" podUID="5fe98aac-9aed-4963-a4a9-eeaa65a11720" containerName="console-operator" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.479992 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.518772 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.518817 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.521284 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6p9z2"]
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.522223 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6p9z2"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.524118 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.545176 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-lt44t"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.581432 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4699f1a8-9e55-49b6-a67f-f84bd256fa0f-utilities\") pod \"community-operators-6p9z2\" (UID: \"4699f1a8-9e55-49b6-a67f-f84bd256fa0f\") " pod="openshift-marketplace/community-operators-6p9z2"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.581483 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h7hm\" (UniqueName: \"kubernetes.io/projected/4699f1a8-9e55-49b6-a67f-f84bd256fa0f-kube-api-access-5h7hm\") pod \"community-operators-6p9z2\" (UID: \"4699f1a8-9e55-49b6-a67f-f84bd256fa0f\") " pod="openshift-marketplace/community-operators-6p9z2"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.581593 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4699f1a8-9e55-49b6-a67f-f84bd256fa0f-catalog-content\") pod \"community-operators-6p9z2\" (UID: \"4699f1a8-9e55-49b6-a67f-f84bd256fa0f\") " pod="openshift-marketplace/community-operators-6p9z2"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.646951 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-qcqwx\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.660007 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6p9z2"]
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.686466 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.686640 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h7hm\" (UniqueName: \"kubernetes.io/projected/4699f1a8-9e55-49b6-a67f-f84bd256fa0f-kube-api-access-5h7hm\") pod \"community-operators-6p9z2\" (UID: \"4699f1a8-9e55-49b6-a67f-f84bd256fa0f\") " pod="openshift-marketplace/community-operators-6p9z2"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.686698 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4699f1a8-9e55-49b6-a67f-f84bd256fa0f-catalog-content\") pod \"community-operators-6p9z2\" (UID: \"4699f1a8-9e55-49b6-a67f-f84bd256fa0f\") " pod="openshift-marketplace/community-operators-6p9z2"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.686742 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4699f1a8-9e55-49b6-a67f-f84bd256fa0f-utilities\") pod \"community-operators-6p9z2\" (UID: \"4699f1a8-9e55-49b6-a67f-f84bd256fa0f\") " pod="openshift-marketplace/community-operators-6p9z2"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.687936 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4699f1a8-9e55-49b6-a67f-f84bd256fa0f-catalog-content\") pod \"community-operators-6p9z2\" (UID: \"4699f1a8-9e55-49b6-a67f-f84bd256fa0f\") " pod="openshift-marketplace/community-operators-6p9z2"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.688163 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4699f1a8-9e55-49b6-a67f-f84bd256fa0f-utilities\") pod \"community-operators-6p9z2\" (UID: \"4699f1a8-9e55-49b6-a67f-f84bd256fa0f\") " pod="openshift-marketplace/community-operators-6p9z2"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.708226 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.716821 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h7hm\" (UniqueName: \"kubernetes.io/projected/4699f1a8-9e55-49b6-a67f-f84bd256fa0f-kube-api-access-5h7hm\") pod \"community-operators-6p9z2\" (UID: \"4699f1a8-9e55-49b6-a67f-f84bd256fa0f\") " pod="openshift-marketplace/community-operators-6p9z2"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.735187 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.742008 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x9nlb"]
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.743001 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x9nlb"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.799591 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.801115 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a91e02b9-77f2-4adc-8255-ef6dca75c2cf-utilities\") pod \"certified-operators-x9nlb\" (UID: \"a91e02b9-77f2-4adc-8255-ef6dca75c2cf\") " pod="openshift-marketplace/certified-operators-x9nlb"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.801200 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a91e02b9-77f2-4adc-8255-ef6dca75c2cf-catalog-content\") pod \"certified-operators-x9nlb\" (UID: \"a91e02b9-77f2-4adc-8255-ef6dca75c2cf\") " pod="openshift-marketplace/certified-operators-x9nlb"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.801278 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vqhd\" (UniqueName: \"kubernetes.io/projected/a91e02b9-77f2-4adc-8255-ef6dca75c2cf-kube-api-access-7vqhd\") pod \"certified-operators-x9nlb\" (UID: \"a91e02b9-77f2-4adc-8255-ef6dca75c2cf\") " pod="openshift-marketplace/certified-operators-x9nlb"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.810238 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x9nlb"]
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.839040 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6p9z2"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.902788 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a91e02b9-77f2-4adc-8255-ef6dca75c2cf-utilities\") pod \"certified-operators-x9nlb\" (UID: \"a91e02b9-77f2-4adc-8255-ef6dca75c2cf\") " pod="openshift-marketplace/certified-operators-x9nlb"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.902860 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a91e02b9-77f2-4adc-8255-ef6dca75c2cf-catalog-content\") pod \"certified-operators-x9nlb\" (UID: \"a91e02b9-77f2-4adc-8255-ef6dca75c2cf\") " pod="openshift-marketplace/certified-operators-x9nlb"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.902895 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vqhd\" (UniqueName: \"kubernetes.io/projected/a91e02b9-77f2-4adc-8255-ef6dca75c2cf-kube-api-access-7vqhd\") pod \"certified-operators-x9nlb\" (UID: \"a91e02b9-77f2-4adc-8255-ef6dca75c2cf\") " pod="openshift-marketplace/certified-operators-x9nlb"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.903487 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a91e02b9-77f2-4adc-8255-ef6dca75c2cf-utilities\") pod \"certified-operators-x9nlb\" (UID: \"a91e02b9-77f2-4adc-8255-ef6dca75c2cf\") " pod="openshift-marketplace/certified-operators-x9nlb"
Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.903634 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName:
\"kubernetes.io/empty-dir/a91e02b9-77f2-4adc-8255-ef6dca75c2cf-catalog-content\") pod \"certified-operators-x9nlb\" (UID: \"a91e02b9-77f2-4adc-8255-ef6dca75c2cf\") " pod="openshift-marketplace/certified-operators-x9nlb" Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.940313 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mqhjn"] Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.941334 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mqhjn" Feb 18 19:20:56 crc kubenswrapper[4754]: I0218 19:20:56.964842 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vqhd\" (UniqueName: \"kubernetes.io/projected/a91e02b9-77f2-4adc-8255-ef6dca75c2cf-kube-api-access-7vqhd\") pod \"certified-operators-x9nlb\" (UID: \"a91e02b9-77f2-4adc-8255-ef6dca75c2cf\") " pod="openshift-marketplace/certified-operators-x9nlb" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.004375 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74046c9-1f25-4668-b742-abae28a18c9b-catalog-content\") pod \"community-operators-mqhjn\" (UID: \"c74046c9-1f25-4668-b742-abae28a18c9b\") " pod="openshift-marketplace/community-operators-mqhjn" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.004442 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5bfl\" (UniqueName: \"kubernetes.io/projected/c74046c9-1f25-4668-b742-abae28a18c9b-kube-api-access-n5bfl\") pod \"community-operators-mqhjn\" (UID: \"c74046c9-1f25-4668-b742-abae28a18c9b\") " pod="openshift-marketplace/community-operators-mqhjn" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.004633 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74046c9-1f25-4668-b742-abae28a18c9b-utilities\") pod \"community-operators-mqhjn\" (UID: \"c74046c9-1f25-4668-b742-abae28a18c9b\") " pod="openshift-marketplace/community-operators-mqhjn" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.031480 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mqhjn"] Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.093217 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" event={"ID":"81d8d7f7-21a1-40a4-ba01-d54c91406f08","Type":"ContainerStarted","Data":"40488b7d3e5377e2b297d354ed16e1548f5f6bfdd27b6cebaf7a555d9fefa995"} Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.106973 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74046c9-1f25-4668-b742-abae28a18c9b-catalog-content\") pod \"community-operators-mqhjn\" (UID: \"c74046c9-1f25-4668-b742-abae28a18c9b\") " pod="openshift-marketplace/community-operators-mqhjn" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.107056 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5bfl\" (UniqueName: \"kubernetes.io/projected/c74046c9-1f25-4668-b742-abae28a18c9b-kube-api-access-n5bfl\") pod \"community-operators-mqhjn\" (UID: \"c74046c9-1f25-4668-b742-abae28a18c9b\") " pod="openshift-marketplace/community-operators-mqhjn" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.107188 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74046c9-1f25-4668-b742-abae28a18c9b-utilities\") pod \"community-operators-mqhjn\" (UID: \"c74046c9-1f25-4668-b742-abae28a18c9b\") " pod="openshift-marketplace/community-operators-mqhjn" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.113045 4754 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-74s7p"] Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.114120 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-74s7p" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.115363 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74046c9-1f25-4668-b742-abae28a18c9b-catalog-content\") pod \"community-operators-mqhjn\" (UID: \"c74046c9-1f25-4668-b742-abae28a18c9b\") " pod="openshift-marketplace/community-operators-mqhjn" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.115456 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74046c9-1f25-4668-b742-abae28a18c9b-utilities\") pod \"community-operators-mqhjn\" (UID: \"c74046c9-1f25-4668-b742-abae28a18c9b\") " pod="openshift-marketplace/community-operators-mqhjn" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.127475 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x9nlb" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.139019 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5bfl\" (UniqueName: \"kubernetes.io/projected/c74046c9-1f25-4668-b742-abae28a18c9b-kube-api-access-n5bfl\") pod \"community-operators-mqhjn\" (UID: \"c74046c9-1f25-4668-b742-abae28a18c9b\") " pod="openshift-marketplace/community-operators-mqhjn" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.139986 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-6vw4g" podStartSLOduration=11.139957098 podStartE2EDuration="11.139957098s" podCreationTimestamp="2026-02-18 19:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:57.121413858 +0000 UTC m=+159.571826674" watchObservedRunningTime="2026-02-18 19:20:57.139957098 +0000 UTC m=+159.590369894" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.146378 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-74s7p"] Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.258511 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mqhjn" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.312937 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kfjn\" (UniqueName: \"kubernetes.io/projected/54801380-5317-40df-b2c8-1a392650cc50-kube-api-access-4kfjn\") pod \"certified-operators-74s7p\" (UID: \"54801380-5317-40df-b2c8-1a392650cc50\") " pod="openshift-marketplace/certified-operators-74s7p" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.312985 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54801380-5317-40df-b2c8-1a392650cc50-catalog-content\") pod \"certified-operators-74s7p\" (UID: \"54801380-5317-40df-b2c8-1a392650cc50\") " pod="openshift-marketplace/certified-operators-74s7p" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.313020 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54801380-5317-40df-b2c8-1a392650cc50-utilities\") pod \"certified-operators-74s7p\" (UID: \"54801380-5317-40df-b2c8-1a392650cc50\") " pod="openshift-marketplace/certified-operators-74s7p" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.363920 4754 patch_prober.go:28] interesting pod/router-default-5444994796-72j8q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:20:57 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Feb 18 19:20:57 crc kubenswrapper[4754]: [+]process-running ok Feb 18 19:20:57 crc kubenswrapper[4754]: healthz check failed Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.363966 4754 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-72j8q" podUID="996b25cf-442f-475a-93aa-3957be55d4f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.414332 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54801380-5317-40df-b2c8-1a392650cc50-utilities\") pod \"certified-operators-74s7p\" (UID: \"54801380-5317-40df-b2c8-1a392650cc50\") " pod="openshift-marketplace/certified-operators-74s7p" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.414421 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kfjn\" (UniqueName: \"kubernetes.io/projected/54801380-5317-40df-b2c8-1a392650cc50-kube-api-access-4kfjn\") pod \"certified-operators-74s7p\" (UID: \"54801380-5317-40df-b2c8-1a392650cc50\") " pod="openshift-marketplace/certified-operators-74s7p" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.414447 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54801380-5317-40df-b2c8-1a392650cc50-catalog-content\") pod \"certified-operators-74s7p\" (UID: \"54801380-5317-40df-b2c8-1a392650cc50\") " pod="openshift-marketplace/certified-operators-74s7p" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.416177 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54801380-5317-40df-b2c8-1a392650cc50-utilities\") pod \"certified-operators-74s7p\" (UID: \"54801380-5317-40df-b2c8-1a392650cc50\") " pod="openshift-marketplace/certified-operators-74s7p" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.416393 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/54801380-5317-40df-b2c8-1a392650cc50-catalog-content\") pod \"certified-operators-74s7p\" (UID: \"54801380-5317-40df-b2c8-1a392650cc50\") " pod="openshift-marketplace/certified-operators-74s7p" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.433421 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6p9z2"] Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.441544 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kfjn\" (UniqueName: \"kubernetes.io/projected/54801380-5317-40df-b2c8-1a392650cc50-kube-api-access-4kfjn\") pod \"certified-operators-74s7p\" (UID: \"54801380-5317-40df-b2c8-1a392650cc50\") " pod="openshift-marketplace/certified-operators-74s7p" Feb 18 19:20:57 crc kubenswrapper[4754]: W0218 19:20:57.444491 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4699f1a8_9e55_49b6_a67f_f84bd256fa0f.slice/crio-1a02ee229b0afc9302d9d4d284435808a2a27936f5d96f500e91a41e42a77736 WatchSource:0}: Error finding container 1a02ee229b0afc9302d9d4d284435808a2a27936f5d96f500e91a41e42a77736: Status 404 returned error can't find the container with id 1a02ee229b0afc9302d9d4d284435808a2a27936f5d96f500e91a41e42a77736 Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.463426 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x9nlb"] Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.465418 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-74s7p" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.485564 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mqhjn"] Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.537895 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qcqwx"] Feb 18 19:20:57 crc kubenswrapper[4754]: W0218 19:20:57.556311 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda00863d2_1742_42b7_a47e_beef12e21834.slice/crio-6c50848b71a4da224a2f06010c1115df88f0b9708904e55f278a3faa818dc039 WatchSource:0}: Error finding container 6c50848b71a4da224a2f06010c1115df88f0b9708904e55f278a3faa818dc039: Status 404 returned error can't find the container with id 6c50848b71a4da224a2f06010c1115df88f0b9708904e55f278a3faa818dc039 Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.676544 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.676605 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.684611 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:57 crc kubenswrapper[4754]: I0218 19:20:57.771063 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-74s7p"] Feb 18 19:20:57 crc kubenswrapper[4754]: W0218 19:20:57.789788 4754 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54801380_5317_40df_b2c8_1a392650cc50.slice/crio-1e7c3f7523f55f4d09e68dd3acd775a206cc23c7162cfe46d2e8e497374d05a3 WatchSource:0}: Error finding container 1e7c3f7523f55f4d09e68dd3acd775a206cc23c7162cfe46d2e8e497374d05a3: Status 404 returned error can't find the container with id 1e7c3f7523f55f4d09e68dd3acd775a206cc23c7162cfe46d2e8e497374d05a3 Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.100437 4754 generic.go:334] "Generic (PLEG): container finished" podID="c74046c9-1f25-4668-b742-abae28a18c9b" containerID="0240911a4e0b2eda00738d321ad958e9ed2865b904610c9d62b97362af397416" exitCode=0 Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.100514 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqhjn" event={"ID":"c74046c9-1f25-4668-b742-abae28a18c9b","Type":"ContainerDied","Data":"0240911a4e0b2eda00738d321ad958e9ed2865b904610c9d62b97362af397416"} Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.102335 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqhjn" event={"ID":"c74046c9-1f25-4668-b742-abae28a18c9b","Type":"ContainerStarted","Data":"3e7a6bf34cba4fa6e4e61d24143faa065045eec71128b4175ed7379a9f427623"} Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.102900 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.105109 4754 generic.go:334] "Generic (PLEG): container finished" podID="a91e02b9-77f2-4adc-8255-ef6dca75c2cf" containerID="246196a9f70ec9b0cc356cfcc21dd10f5e0c69199ec619316384b42762bb3984" exitCode=0 Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.105300 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x9nlb" 
event={"ID":"a91e02b9-77f2-4adc-8255-ef6dca75c2cf","Type":"ContainerDied","Data":"246196a9f70ec9b0cc356cfcc21dd10f5e0c69199ec619316384b42762bb3984"} Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.105368 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x9nlb" event={"ID":"a91e02b9-77f2-4adc-8255-ef6dca75c2cf","Type":"ContainerStarted","Data":"be287a567d83c993343d142be4d6e70a903de9ca8b20dbf261de162e3b7b53e0"} Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.109746 4754 generic.go:334] "Generic (PLEG): container finished" podID="4699f1a8-9e55-49b6-a67f-f84bd256fa0f" containerID="c68e6bfd4642152db923c9002c4b6f7953d98001febce556de4d68d3fd123d99" exitCode=0 Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.109816 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6p9z2" event={"ID":"4699f1a8-9e55-49b6-a67f-f84bd256fa0f","Type":"ContainerDied","Data":"c68e6bfd4642152db923c9002c4b6f7953d98001febce556de4d68d3fd123d99"} Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.109847 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6p9z2" event={"ID":"4699f1a8-9e55-49b6-a67f-f84bd256fa0f","Type":"ContainerStarted","Data":"1a02ee229b0afc9302d9d4d284435808a2a27936f5d96f500e91a41e42a77736"} Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.116616 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" event={"ID":"a00863d2-1742-42b7-a47e-beef12e21834","Type":"ContainerStarted","Data":"003f5e45a83da7edaf2296110784775e03894de115ee60a31635a67cf1ffff9a"} Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.116749 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" 
event={"ID":"a00863d2-1742-42b7-a47e-beef12e21834","Type":"ContainerStarted","Data":"6c50848b71a4da224a2f06010c1115df88f0b9708904e55f278a3faa818dc039"} Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.116901 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.119797 4754 generic.go:334] "Generic (PLEG): container finished" podID="54801380-5317-40df-b2c8-1a392650cc50" containerID="f8d08d97a15d9d2da9693fa9bd8df322a8c13121e4f554b3a1e90259da7232c6" exitCode=0 Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.121615 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74s7p" event={"ID":"54801380-5317-40df-b2c8-1a392650cc50","Type":"ContainerDied","Data":"f8d08d97a15d9d2da9693fa9bd8df322a8c13121e4f554b3a1e90259da7232c6"} Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.121721 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74s7p" event={"ID":"54801380-5317-40df-b2c8-1a392650cc50","Type":"ContainerStarted","Data":"1e7c3f7523f55f4d09e68dd3acd775a206cc23c7162cfe46d2e8e497374d05a3"} Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.140824 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-k7hhc" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.149758 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" podStartSLOduration=136.149709862 podStartE2EDuration="2m16.149709862s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:58.146333158 +0000 UTC m=+160.596745954" watchObservedRunningTime="2026-02-18 
19:20:58.149709862 +0000 UTC m=+160.600122678" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.221986 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.328307 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.329318 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.336901 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.337493 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.360343 4754 patch_prober.go:28] interesting pod/router-default-5444994796-72j8q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:20:58 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Feb 18 19:20:58 crc kubenswrapper[4754]: [+]process-running ok Feb 18 19:20:58 crc kubenswrapper[4754]: healthz check failed Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.360411 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-72j8q" podUID="996b25cf-442f-475a-93aa-3957be55d4f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.389263 4754 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.434014 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/019d9051-4d5f-4083-bbb1-b9165ecc2dd0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"019d9051-4d5f-4083-bbb1-b9165ecc2dd0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.434161 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/019d9051-4d5f-4083-bbb1-b9165ecc2dd0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"019d9051-4d5f-4083-bbb1-b9165ecc2dd0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.535130 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/019d9051-4d5f-4083-bbb1-b9165ecc2dd0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"019d9051-4d5f-4083-bbb1-b9165ecc2dd0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.535542 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/019d9051-4d5f-4083-bbb1-b9165ecc2dd0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"019d9051-4d5f-4083-bbb1-b9165ecc2dd0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.535631 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/019d9051-4d5f-4083-bbb1-b9165ecc2dd0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"019d9051-4d5f-4083-bbb1-b9165ecc2dd0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.563192 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/019d9051-4d5f-4083-bbb1-b9165ecc2dd0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"019d9051-4d5f-4083-bbb1-b9165ecc2dd0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.651341 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.716913 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k2vnz"] Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.718105 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2vnz" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.724653 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.736987 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2vnz"] Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.840584 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cabec9c1-d434-4382-87e9-c488658c02fe-utilities\") pod \"redhat-marketplace-k2vnz\" (UID: \"cabec9c1-d434-4382-87e9-c488658c02fe\") " pod="openshift-marketplace/redhat-marketplace-k2vnz" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.840667 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fcv9v\" (UniqueName: \"kubernetes.io/projected/cabec9c1-d434-4382-87e9-c488658c02fe-kube-api-access-fcv9v\") pod \"redhat-marketplace-k2vnz\" (UID: \"cabec9c1-d434-4382-87e9-c488658c02fe\") " pod="openshift-marketplace/redhat-marketplace-k2vnz" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.840703 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cabec9c1-d434-4382-87e9-c488658c02fe-catalog-content\") pod \"redhat-marketplace-k2vnz\" (UID: \"cabec9c1-d434-4382-87e9-c488658c02fe\") " pod="openshift-marketplace/redhat-marketplace-k2vnz" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.942188 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cabec9c1-d434-4382-87e9-c488658c02fe-utilities\") pod \"redhat-marketplace-k2vnz\" (UID: \"cabec9c1-d434-4382-87e9-c488658c02fe\") " pod="openshift-marketplace/redhat-marketplace-k2vnz" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.942676 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcv9v\" (UniqueName: \"kubernetes.io/projected/cabec9c1-d434-4382-87e9-c488658c02fe-kube-api-access-fcv9v\") pod \"redhat-marketplace-k2vnz\" (UID: \"cabec9c1-d434-4382-87e9-c488658c02fe\") " pod="openshift-marketplace/redhat-marketplace-k2vnz" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.942701 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cabec9c1-d434-4382-87e9-c488658c02fe-catalog-content\") pod \"redhat-marketplace-k2vnz\" (UID: \"cabec9c1-d434-4382-87e9-c488658c02fe\") " pod="openshift-marketplace/redhat-marketplace-k2vnz" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.943322 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cabec9c1-d434-4382-87e9-c488658c02fe-catalog-content\") pod \"redhat-marketplace-k2vnz\" (UID: \"cabec9c1-d434-4382-87e9-c488658c02fe\") " pod="openshift-marketplace/redhat-marketplace-k2vnz" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.943617 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cabec9c1-d434-4382-87e9-c488658c02fe-utilities\") pod \"redhat-marketplace-k2vnz\" (UID: \"cabec9c1-d434-4382-87e9-c488658c02fe\") " pod="openshift-marketplace/redhat-marketplace-k2vnz" Feb 18 19:20:58 crc kubenswrapper[4754]: I0218 19:20:58.962317 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcv9v\" (UniqueName: \"kubernetes.io/projected/cabec9c1-d434-4382-87e9-c488658c02fe-kube-api-access-fcv9v\") pod \"redhat-marketplace-k2vnz\" (UID: \"cabec9c1-d434-4382-87e9-c488658c02fe\") " pod="openshift-marketplace/redhat-marketplace-k2vnz" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.039048 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2vnz" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.117618 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5g8x2"] Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.118928 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g8x2" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.126840 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.153119 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g8x2"] Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.254446 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxpdb\" (UniqueName: \"kubernetes.io/projected/666f2f31-98d7-4fd3-ac3a-2a345ea089e2-kube-api-access-cxpdb\") pod \"redhat-marketplace-5g8x2\" (UID: \"666f2f31-98d7-4fd3-ac3a-2a345ea089e2\") " pod="openshift-marketplace/redhat-marketplace-5g8x2" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.254920 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/666f2f31-98d7-4fd3-ac3a-2a345ea089e2-catalog-content\") pod \"redhat-marketplace-5g8x2\" (UID: \"666f2f31-98d7-4fd3-ac3a-2a345ea089e2\") " pod="openshift-marketplace/redhat-marketplace-5g8x2" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.254947 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/666f2f31-98d7-4fd3-ac3a-2a345ea089e2-utilities\") pod \"redhat-marketplace-5g8x2\" (UID: \"666f2f31-98d7-4fd3-ac3a-2a345ea089e2\") " pod="openshift-marketplace/redhat-marketplace-5g8x2" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.310244 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2vnz"] Feb 18 19:20:59 crc kubenswrapper[4754]: W0218 19:20:59.349725 4754 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcabec9c1_d434_4382_87e9_c488658c02fe.slice/crio-d62f67f0091026e4e15b210099841ac49e007d4acbffde65f973d782e28a3103 WatchSource:0}: Error finding container d62f67f0091026e4e15b210099841ac49e007d4acbffde65f973d782e28a3103: Status 404 returned error can't find the container with id d62f67f0091026e4e15b210099841ac49e007d4acbffde65f973d782e28a3103 Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.353380 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-72j8q" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.356106 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/666f2f31-98d7-4fd3-ac3a-2a345ea089e2-catalog-content\") pod \"redhat-marketplace-5g8x2\" (UID: \"666f2f31-98d7-4fd3-ac3a-2a345ea089e2\") " pod="openshift-marketplace/redhat-marketplace-5g8x2" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.356189 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/666f2f31-98d7-4fd3-ac3a-2a345ea089e2-utilities\") pod \"redhat-marketplace-5g8x2\" (UID: \"666f2f31-98d7-4fd3-ac3a-2a345ea089e2\") " pod="openshift-marketplace/redhat-marketplace-5g8x2" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.356371 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxpdb\" (UniqueName: \"kubernetes.io/projected/666f2f31-98d7-4fd3-ac3a-2a345ea089e2-kube-api-access-cxpdb\") pod \"redhat-marketplace-5g8x2\" (UID: \"666f2f31-98d7-4fd3-ac3a-2a345ea089e2\") " pod="openshift-marketplace/redhat-marketplace-5g8x2" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.358108 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/666f2f31-98d7-4fd3-ac3a-2a345ea089e2-catalog-content\") pod \"redhat-marketplace-5g8x2\" (UID: \"666f2f31-98d7-4fd3-ac3a-2a345ea089e2\") " pod="openshift-marketplace/redhat-marketplace-5g8x2" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.358347 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/666f2f31-98d7-4fd3-ac3a-2a345ea089e2-utilities\") pod \"redhat-marketplace-5g8x2\" (UID: \"666f2f31-98d7-4fd3-ac3a-2a345ea089e2\") " pod="openshift-marketplace/redhat-marketplace-5g8x2" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.359197 4754 patch_prober.go:28] interesting pod/router-default-5444994796-72j8q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:20:59 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Feb 18 19:20:59 crc kubenswrapper[4754]: [+]process-running ok Feb 18 19:20:59 crc kubenswrapper[4754]: healthz check failed Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.359240 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-72j8q" podUID="996b25cf-442f-475a-93aa-3957be55d4f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.377363 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxpdb\" (UniqueName: \"kubernetes.io/projected/666f2f31-98d7-4fd3-ac3a-2a345ea089e2-kube-api-access-cxpdb\") pod \"redhat-marketplace-5g8x2\" (UID: \"666f2f31-98d7-4fd3-ac3a-2a345ea089e2\") " pod="openshift-marketplace/redhat-marketplace-5g8x2" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.455916 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g8x2" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.471222 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.472161 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.474115 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.475254 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-bs2kd" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.480344 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.481061 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.481108 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.481606 4754 patch_prober.go:28] interesting pod/console-f9d7485db-72dh6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.481699 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-72dh6" podUID="a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc" containerName="console" probeResult="failure" output="Get 
\"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.519259 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.560094 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec35fd34-34c2-42e0-8e8d-4b210660e5ed-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ec35fd34-34c2-42e0-8e8d-4b210660e5ed\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.560176 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec35fd34-34c2-42e0-8e8d-4b210660e5ed-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ec35fd34-34c2-42e0-8e8d-4b210660e5ed\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.672082 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec35fd34-34c2-42e0-8e8d-4b210660e5ed-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ec35fd34-34c2-42e0-8e8d-4b210660e5ed\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.672767 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec35fd34-34c2-42e0-8e8d-4b210660e5ed-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ec35fd34-34c2-42e0-8e8d-4b210660e5ed\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.673007 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec35fd34-34c2-42e0-8e8d-4b210660e5ed-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ec35fd34-34c2-42e0-8e8d-4b210660e5ed\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.707182 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec35fd34-34c2-42e0-8e8d-4b210660e5ed-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ec35fd34-34c2-42e0-8e8d-4b210660e5ed\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.735760 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kqd9p"] Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.738827 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kqd9p" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.758454 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.758451 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kqd9p"] Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.819271 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.877363 4754 patch_prober.go:28] interesting pod/downloads-7954f5f757-nszgz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.40:8080/\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.877905 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-nszgz" podUID="c5f2fe27-4245-4b3e-bdf2-e65a5d6e4777" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.40:8080/\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.877364 4754 patch_prober.go:28] interesting pod/downloads-7954f5f757-nszgz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.878326 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nszgz" podUID="c5f2fe27-4245-4b3e-bdf2-e65a5d6e4777" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.40:8080/\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.891510 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51b42468-3dc4-425d-ae7c-de59263bbf39-utilities\") pod \"redhat-operators-kqd9p\" (UID: \"51b42468-3dc4-425d-ae7c-de59263bbf39\") " pod="openshift-marketplace/redhat-operators-kqd9p" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.891638 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51b42468-3dc4-425d-ae7c-de59263bbf39-catalog-content\") pod \"redhat-operators-kqd9p\" (UID: \"51b42468-3dc4-425d-ae7c-de59263bbf39\") " pod="openshift-marketplace/redhat-operators-kqd9p" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.891667 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szs7j\" (UniqueName: \"kubernetes.io/projected/51b42468-3dc4-425d-ae7c-de59263bbf39-kube-api-access-szs7j\") pod \"redhat-operators-kqd9p\" (UID: \"51b42468-3dc4-425d-ae7c-de59263bbf39\") " pod="openshift-marketplace/redhat-operators-kqd9p" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.994082 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51b42468-3dc4-425d-ae7c-de59263bbf39-utilities\") pod \"redhat-operators-kqd9p\" (UID: \"51b42468-3dc4-425d-ae7c-de59263bbf39\") " pod="openshift-marketplace/redhat-operators-kqd9p" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.994219 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51b42468-3dc4-425d-ae7c-de59263bbf39-catalog-content\") pod \"redhat-operators-kqd9p\" (UID: \"51b42468-3dc4-425d-ae7c-de59263bbf39\") " pod="openshift-marketplace/redhat-operators-kqd9p" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.994250 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szs7j\" (UniqueName: \"kubernetes.io/projected/51b42468-3dc4-425d-ae7c-de59263bbf39-kube-api-access-szs7j\") pod \"redhat-operators-kqd9p\" (UID: \"51b42468-3dc4-425d-ae7c-de59263bbf39\") " pod="openshift-marketplace/redhat-operators-kqd9p" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.994787 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51b42468-3dc4-425d-ae7c-de59263bbf39-catalog-content\") pod \"redhat-operators-kqd9p\" (UID: \"51b42468-3dc4-425d-ae7c-de59263bbf39\") " pod="openshift-marketplace/redhat-operators-kqd9p" Feb 18 19:20:59 crc kubenswrapper[4754]: I0218 19:20:59.994821 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51b42468-3dc4-425d-ae7c-de59263bbf39-utilities\") pod \"redhat-operators-kqd9p\" (UID: \"51b42468-3dc4-425d-ae7c-de59263bbf39\") " pod="openshift-marketplace/redhat-operators-kqd9p" Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.058525 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szs7j\" (UniqueName: \"kubernetes.io/projected/51b42468-3dc4-425d-ae7c-de59263bbf39-kube-api-access-szs7j\") pod \"redhat-operators-kqd9p\" (UID: \"51b42468-3dc4-425d-ae7c-de59263bbf39\") " pod="openshift-marketplace/redhat-operators-kqd9p" Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.121322 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n4mf2"] Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.122416 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n4mf2" Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.140887 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g8x2"] Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.153002 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n4mf2"] Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.179663 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kqd9p" Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.214282 4754 generic.go:334] "Generic (PLEG): container finished" podID="cabec9c1-d434-4382-87e9-c488658c02fe" containerID="d98861ffc759ab6f545743e8e6461b58c6098364d3f1ac020b703614973a6056" exitCode=0 Feb 18 19:21:00 crc kubenswrapper[4754]: W0218 19:21:00.222312 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod666f2f31_98d7_4fd3_ac3a_2a345ea089e2.slice/crio-e68d136738505ede56f9b84d789220d5e8c6968985431c30be4e7ea5850f384c WatchSource:0}: Error finding container e68d136738505ede56f9b84d789220d5e8c6968985431c30be4e7ea5850f384c: Status 404 returned error can't find the container with id e68d136738505ede56f9b84d789220d5e8c6968985431c30be4e7ea5850f384c Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.224968 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2vnz" event={"ID":"cabec9c1-d434-4382-87e9-c488658c02fe","Type":"ContainerDied","Data":"d98861ffc759ab6f545743e8e6461b58c6098364d3f1ac020b703614973a6056"} Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.225030 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2vnz" event={"ID":"cabec9c1-d434-4382-87e9-c488658c02fe","Type":"ContainerStarted","Data":"d62f67f0091026e4e15b210099841ac49e007d4acbffde65f973d782e28a3103"} Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.243324 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"019d9051-4d5f-4083-bbb1-b9165ecc2dd0","Type":"ContainerStarted","Data":"59a52dc47b8857ff8a485c07a2dd9c834713bfc27a87b2c4ab58b04f73d185f5"} Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.243397 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"019d9051-4d5f-4083-bbb1-b9165ecc2dd0","Type":"ContainerStarted","Data":"9744b06bdc4ea31393d5c6aaf2a0b94623de8f2c21379d2266033199aa7a5766"} Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.299024 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/825e85f9-84d0-4bc6-b250-29365ffbbc38-utilities\") pod \"redhat-operators-n4mf2\" (UID: \"825e85f9-84d0-4bc6-b250-29365ffbbc38\") " pod="openshift-marketplace/redhat-operators-n4mf2" Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.299547 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plmgw\" (UniqueName: \"kubernetes.io/projected/825e85f9-84d0-4bc6-b250-29365ffbbc38-kube-api-access-plmgw\") pod \"redhat-operators-n4mf2\" (UID: \"825e85f9-84d0-4bc6-b250-29365ffbbc38\") " pod="openshift-marketplace/redhat-operators-n4mf2" Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.299609 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/825e85f9-84d0-4bc6-b250-29365ffbbc38-catalog-content\") pod \"redhat-operators-n4mf2\" (UID: \"825e85f9-84d0-4bc6-b250-29365ffbbc38\") " pod="openshift-marketplace/redhat-operators-n4mf2" Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.355990 4754 patch_prober.go:28] interesting pod/router-default-5444994796-72j8q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:21:00 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Feb 18 19:21:00 crc kubenswrapper[4754]: [+]process-running ok Feb 18 19:21:00 crc kubenswrapper[4754]: healthz check failed Feb 18 19:21:00 crc 
kubenswrapper[4754]: I0218 19:21:00.356041 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-72j8q" podUID="996b25cf-442f-475a-93aa-3957be55d4f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.403797 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/825e85f9-84d0-4bc6-b250-29365ffbbc38-catalog-content\") pod \"redhat-operators-n4mf2\" (UID: \"825e85f9-84d0-4bc6-b250-29365ffbbc38\") " pod="openshift-marketplace/redhat-operators-n4mf2" Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.403926 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/825e85f9-84d0-4bc6-b250-29365ffbbc38-utilities\") pod \"redhat-operators-n4mf2\" (UID: \"825e85f9-84d0-4bc6-b250-29365ffbbc38\") " pod="openshift-marketplace/redhat-operators-n4mf2" Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.403966 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plmgw\" (UniqueName: \"kubernetes.io/projected/825e85f9-84d0-4bc6-b250-29365ffbbc38-kube-api-access-plmgw\") pod \"redhat-operators-n4mf2\" (UID: \"825e85f9-84d0-4bc6-b250-29365ffbbc38\") " pod="openshift-marketplace/redhat-operators-n4mf2" Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.404709 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/825e85f9-84d0-4bc6-b250-29365ffbbc38-catalog-content\") pod \"redhat-operators-n4mf2\" (UID: \"825e85f9-84d0-4bc6-b250-29365ffbbc38\") " pod="openshift-marketplace/redhat-operators-n4mf2" Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.405862 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/825e85f9-84d0-4bc6-b250-29365ffbbc38-utilities\") pod \"redhat-operators-n4mf2\" (UID: \"825e85f9-84d0-4bc6-b250-29365ffbbc38\") " pod="openshift-marketplace/redhat-operators-n4mf2" Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.425000 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plmgw\" (UniqueName: \"kubernetes.io/projected/825e85f9-84d0-4bc6-b250-29365ffbbc38-kube-api-access-plmgw\") pod \"redhat-operators-n4mf2\" (UID: \"825e85f9-84d0-4bc6-b250-29365ffbbc38\") " pod="openshift-marketplace/redhat-operators-n4mf2" Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.450481 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n4mf2" Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.451861 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.451844018 podStartE2EDuration="2.451844018s" podCreationTimestamp="2026-02-18 19:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:21:00.278109045 +0000 UTC m=+162.728521841" watchObservedRunningTime="2026-02-18 19:21:00.451844018 +0000 UTC m=+162.902256804" Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.452313 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.545372 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kqd9p"] Feb 18 19:21:00 crc kubenswrapper[4754]: I0218 19:21:00.941979 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n4mf2"] Feb 18 19:21:00 crc kubenswrapper[4754]: W0218 19:21:00.969332 4754 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod825e85f9_84d0_4bc6_b250_29365ffbbc38.slice/crio-388a5cfe8c234200bc1cc60a65d28a9edef8ca783945b063db372214bbd0353b WatchSource:0}: Error finding container 388a5cfe8c234200bc1cc60a65d28a9edef8ca783945b063db372214bbd0353b: Status 404 returned error can't find the container with id 388a5cfe8c234200bc1cc60a65d28a9edef8ca783945b063db372214bbd0353b Feb 18 19:21:01 crc kubenswrapper[4754]: I0218 19:21:01.259106 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ec35fd34-34c2-42e0-8e8d-4b210660e5ed","Type":"ContainerStarted","Data":"65296b13103fe28c462755ad287e4c108582f6b5804b53411077662cf897d926"} Feb 18 19:21:01 crc kubenswrapper[4754]: I0218 19:21:01.264609 4754 generic.go:334] "Generic (PLEG): container finished" podID="666f2f31-98d7-4fd3-ac3a-2a345ea089e2" containerID="c0cb6ca0f73048ccda2489a3386299d9782981150a5db571c52205eb54543567" exitCode=0 Feb 18 19:21:01 crc kubenswrapper[4754]: I0218 19:21:01.264712 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8x2" event={"ID":"666f2f31-98d7-4fd3-ac3a-2a345ea089e2","Type":"ContainerDied","Data":"c0cb6ca0f73048ccda2489a3386299d9782981150a5db571c52205eb54543567"} Feb 18 19:21:01 crc kubenswrapper[4754]: I0218 19:21:01.264754 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8x2" event={"ID":"666f2f31-98d7-4fd3-ac3a-2a345ea089e2","Type":"ContainerStarted","Data":"e68d136738505ede56f9b84d789220d5e8c6968985431c30be4e7ea5850f384c"} Feb 18 19:21:01 crc kubenswrapper[4754]: I0218 19:21:01.274041 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4mf2" event={"ID":"825e85f9-84d0-4bc6-b250-29365ffbbc38","Type":"ContainerStarted","Data":"8fd4d284b5d577131ddca53f4beb4b13b2a95d9728b26e2211577956d0b361f5"} Feb 18 
19:21:01 crc kubenswrapper[4754]: I0218 19:21:01.274085 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4mf2" event={"ID":"825e85f9-84d0-4bc6-b250-29365ffbbc38","Type":"ContainerStarted","Data":"388a5cfe8c234200bc1cc60a65d28a9edef8ca783945b063db372214bbd0353b"} Feb 18 19:21:01 crc kubenswrapper[4754]: I0218 19:21:01.278096 4754 generic.go:334] "Generic (PLEG): container finished" podID="51b42468-3dc4-425d-ae7c-de59263bbf39" containerID="deaeb149832699c1d29c64212f1a5dceeef7d5eaf6efa0f14e43970ba5afe27b" exitCode=0 Feb 18 19:21:01 crc kubenswrapper[4754]: I0218 19:21:01.278446 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqd9p" event={"ID":"51b42468-3dc4-425d-ae7c-de59263bbf39","Type":"ContainerDied","Data":"deaeb149832699c1d29c64212f1a5dceeef7d5eaf6efa0f14e43970ba5afe27b"} Feb 18 19:21:01 crc kubenswrapper[4754]: I0218 19:21:01.278480 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqd9p" event={"ID":"51b42468-3dc4-425d-ae7c-de59263bbf39","Type":"ContainerStarted","Data":"99f72b88150e585ba3b44b71829ef7041459cbc65ff0c85968a237c0f5e9ba21"} Feb 18 19:21:01 crc kubenswrapper[4754]: I0218 19:21:01.287176 4754 generic.go:334] "Generic (PLEG): container finished" podID="019d9051-4d5f-4083-bbb1-b9165ecc2dd0" containerID="59a52dc47b8857ff8a485c07a2dd9c834713bfc27a87b2c4ab58b04f73d185f5" exitCode=0 Feb 18 19:21:01 crc kubenswrapper[4754]: I0218 19:21:01.287226 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"019d9051-4d5f-4083-bbb1-b9165ecc2dd0","Type":"ContainerDied","Data":"59a52dc47b8857ff8a485c07a2dd9c834713bfc27a87b2c4ab58b04f73d185f5"} Feb 18 19:21:01 crc kubenswrapper[4754]: I0218 19:21:01.355954 4754 patch_prober.go:28] interesting pod/router-default-5444994796-72j8q container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:21:01 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Feb 18 19:21:01 crc kubenswrapper[4754]: [+]process-running ok Feb 18 19:21:01 crc kubenswrapper[4754]: healthz check failed Feb 18 19:21:01 crc kubenswrapper[4754]: I0218 19:21:01.356534 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-72j8q" podUID="996b25cf-442f-475a-93aa-3957be55d4f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:21:02 crc kubenswrapper[4754]: I0218 19:21:02.331816 4754 generic.go:334] "Generic (PLEG): container finished" podID="ec35fd34-34c2-42e0-8e8d-4b210660e5ed" containerID="ef92e8ffc4ca9bcae069be31d912d7154a588b381e13209e95eedd6fb4630262" exitCode=0 Feb 18 19:21:02 crc kubenswrapper[4754]: I0218 19:21:02.331892 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ec35fd34-34c2-42e0-8e8d-4b210660e5ed","Type":"ContainerDied","Data":"ef92e8ffc4ca9bcae069be31d912d7154a588b381e13209e95eedd6fb4630262"} Feb 18 19:21:02 crc kubenswrapper[4754]: I0218 19:21:02.355665 4754 patch_prober.go:28] interesting pod/router-default-5444994796-72j8q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:21:02 crc kubenswrapper[4754]: [-]has-synced failed: reason withheld Feb 18 19:21:02 crc kubenswrapper[4754]: [+]process-running ok Feb 18 19:21:02 crc kubenswrapper[4754]: healthz check failed Feb 18 19:21:02 crc kubenswrapper[4754]: I0218 19:21:02.355726 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-72j8q" podUID="996b25cf-442f-475a-93aa-3957be55d4f5" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Feb 18 19:21:02 crc kubenswrapper[4754]: I0218 19:21:02.369705 4754 generic.go:334] "Generic (PLEG): container finished" podID="825e85f9-84d0-4bc6-b250-29365ffbbc38" containerID="8fd4d284b5d577131ddca53f4beb4b13b2a95d9728b26e2211577956d0b361f5" exitCode=0 Feb 18 19:21:02 crc kubenswrapper[4754]: I0218 19:21:02.369943 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4mf2" event={"ID":"825e85f9-84d0-4bc6-b250-29365ffbbc38","Type":"ContainerDied","Data":"8fd4d284b5d577131ddca53f4beb4b13b2a95d9728b26e2211577956d0b361f5"} Feb 18 19:21:02 crc kubenswrapper[4754]: I0218 19:21:02.782666 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:21:02 crc kubenswrapper[4754]: I0218 19:21:02.856900 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/019d9051-4d5f-4083-bbb1-b9165ecc2dd0-kube-api-access\") pod \"019d9051-4d5f-4083-bbb1-b9165ecc2dd0\" (UID: \"019d9051-4d5f-4083-bbb1-b9165ecc2dd0\") " Feb 18 19:21:02 crc kubenswrapper[4754]: I0218 19:21:02.856956 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/019d9051-4d5f-4083-bbb1-b9165ecc2dd0-kubelet-dir\") pod \"019d9051-4d5f-4083-bbb1-b9165ecc2dd0\" (UID: \"019d9051-4d5f-4083-bbb1-b9165ecc2dd0\") " Feb 18 19:21:02 crc kubenswrapper[4754]: I0218 19:21:02.857434 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/019d9051-4d5f-4083-bbb1-b9165ecc2dd0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "019d9051-4d5f-4083-bbb1-b9165ecc2dd0" (UID: "019d9051-4d5f-4083-bbb1-b9165ecc2dd0"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:21:02 crc kubenswrapper[4754]: I0218 19:21:02.863831 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/019d9051-4d5f-4083-bbb1-b9165ecc2dd0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "019d9051-4d5f-4083-bbb1-b9165ecc2dd0" (UID: "019d9051-4d5f-4083-bbb1-b9165ecc2dd0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:21:02 crc kubenswrapper[4754]: I0218 19:21:02.958681 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/019d9051-4d5f-4083-bbb1-b9165ecc2dd0-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:02 crc kubenswrapper[4754]: I0218 19:21:02.958716 4754 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/019d9051-4d5f-4083-bbb1-b9165ecc2dd0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:03 crc kubenswrapper[4754]: I0218 19:21:03.358190 4754 patch_prober.go:28] interesting pod/router-default-5444994796-72j8q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:21:03 crc kubenswrapper[4754]: [+]has-synced ok Feb 18 19:21:03 crc kubenswrapper[4754]: [+]process-running ok Feb 18 19:21:03 crc kubenswrapper[4754]: healthz check failed Feb 18 19:21:03 crc kubenswrapper[4754]: I0218 19:21:03.358294 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-72j8q" podUID="996b25cf-442f-475a-93aa-3957be55d4f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:21:03 crc kubenswrapper[4754]: I0218 19:21:03.403070 4754 generic.go:334] "Generic (PLEG): container finished" 
podID="9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e" containerID="229d35e3f0b5b7fc8f5f79e919807c8ffb8676cd073ac321bef9c8bea8700569" exitCode=0 Feb 18 19:21:03 crc kubenswrapper[4754]: I0218 19:21:03.403184 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s" event={"ID":"9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e","Type":"ContainerDied","Data":"229d35e3f0b5b7fc8f5f79e919807c8ffb8676cd073ac321bef9c8bea8700569"} Feb 18 19:21:03 crc kubenswrapper[4754]: I0218 19:21:03.410790 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:21:03 crc kubenswrapper[4754]: I0218 19:21:03.411998 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"019d9051-4d5f-4083-bbb1-b9165ecc2dd0","Type":"ContainerDied","Data":"9744b06bdc4ea31393d5c6aaf2a0b94623de8f2c21379d2266033199aa7a5766"} Feb 18 19:21:03 crc kubenswrapper[4754]: I0218 19:21:03.412073 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9744b06bdc4ea31393d5c6aaf2a0b94623de8f2c21379d2266033199aa7a5766" Feb 18 19:21:03 crc kubenswrapper[4754]: I0218 19:21:03.841046 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:21:03 crc kubenswrapper[4754]: I0218 19:21:03.985799 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec35fd34-34c2-42e0-8e8d-4b210660e5ed-kubelet-dir\") pod \"ec35fd34-34c2-42e0-8e8d-4b210660e5ed\" (UID: \"ec35fd34-34c2-42e0-8e8d-4b210660e5ed\") " Feb 18 19:21:03 crc kubenswrapper[4754]: I0218 19:21:03.985889 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec35fd34-34c2-42e0-8e8d-4b210660e5ed-kube-api-access\") pod \"ec35fd34-34c2-42e0-8e8d-4b210660e5ed\" (UID: \"ec35fd34-34c2-42e0-8e8d-4b210660e5ed\") " Feb 18 19:21:03 crc kubenswrapper[4754]: I0218 19:21:03.985991 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec35fd34-34c2-42e0-8e8d-4b210660e5ed-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ec35fd34-34c2-42e0-8e8d-4b210660e5ed" (UID: "ec35fd34-34c2-42e0-8e8d-4b210660e5ed"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:21:03 crc kubenswrapper[4754]: I0218 19:21:03.986280 4754 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec35fd34-34c2-42e0-8e8d-4b210660e5ed-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:04 crc kubenswrapper[4754]: I0218 19:21:04.022553 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec35fd34-34c2-42e0-8e8d-4b210660e5ed-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ec35fd34-34c2-42e0-8e8d-4b210660e5ed" (UID: "ec35fd34-34c2-42e0-8e8d-4b210660e5ed"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:21:04 crc kubenswrapper[4754]: I0218 19:21:04.087909 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec35fd34-34c2-42e0-8e8d-4b210660e5ed-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:04 crc kubenswrapper[4754]: I0218 19:21:04.356258 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-72j8q" Feb 18 19:21:04 crc kubenswrapper[4754]: I0218 19:21:04.359061 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-72j8q" Feb 18 19:21:04 crc kubenswrapper[4754]: I0218 19:21:04.447211 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:21:04 crc kubenswrapper[4754]: I0218 19:21:04.444322 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ec35fd34-34c2-42e0-8e8d-4b210660e5ed","Type":"ContainerDied","Data":"65296b13103fe28c462755ad287e4c108582f6b5804b53411077662cf897d926"} Feb 18 19:21:04 crc kubenswrapper[4754]: I0218 19:21:04.447635 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65296b13103fe28c462755ad287e4c108582f6b5804b53411077662cf897d926" Feb 18 19:21:04 crc kubenswrapper[4754]: I0218 19:21:04.728198 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s" Feb 18 19:21:04 crc kubenswrapper[4754]: I0218 19:21:04.804778 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwcvf\" (UniqueName: \"kubernetes.io/projected/9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e-kube-api-access-qwcvf\") pod \"9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e\" (UID: \"9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e\") " Feb 18 19:21:04 crc kubenswrapper[4754]: I0218 19:21:04.804904 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e-secret-volume\") pod \"9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e\" (UID: \"9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e\") " Feb 18 19:21:04 crc kubenswrapper[4754]: I0218 19:21:04.805050 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e-config-volume\") pod \"9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e\" (UID: \"9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e\") " Feb 18 19:21:04 crc kubenswrapper[4754]: I0218 19:21:04.805994 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e-config-volume" (OuterVolumeSpecName: "config-volume") pod "9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e" (UID: "9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:21:04 crc kubenswrapper[4754]: I0218 19:21:04.820461 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e-kube-api-access-qwcvf" (OuterVolumeSpecName: "kube-api-access-qwcvf") pod "9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e" (UID: "9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e"). 
InnerVolumeSpecName "kube-api-access-qwcvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:21:04 crc kubenswrapper[4754]: I0218 19:21:04.823447 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e" (UID: "9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:21:04 crc kubenswrapper[4754]: I0218 19:21:04.907424 4754 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:04 crc kubenswrapper[4754]: I0218 19:21:04.907478 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwcvf\" (UniqueName: \"kubernetes.io/projected/9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e-kube-api-access-qwcvf\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:04 crc kubenswrapper[4754]: I0218 19:21:04.907500 4754 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:05 crc kubenswrapper[4754]: I0218 19:21:05.011772 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-2hzgm" Feb 18 19:21:05 crc kubenswrapper[4754]: I0218 19:21:05.116048 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs\") pod \"network-metrics-daemon-qztvz\" (UID: \"539505bb-b2d2-4adc-be1e-a95f73778a52\") " pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:21:05 crc kubenswrapper[4754]: I0218 19:21:05.121006 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/539505bb-b2d2-4adc-be1e-a95f73778a52-metrics-certs\") pod \"network-metrics-daemon-qztvz\" (UID: \"539505bb-b2d2-4adc-be1e-a95f73778a52\") " pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:21:05 crc kubenswrapper[4754]: I0218 19:21:05.171065 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qztvz" Feb 18 19:21:05 crc kubenswrapper[4754]: I0218 19:21:05.489240 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s" event={"ID":"9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e","Type":"ContainerDied","Data":"e8841be7b59ddf9e8fe30b2471a78a443adc91ce48e3f7604508946fe676b5c8"} Feb 18 19:21:05 crc kubenswrapper[4754]: I0218 19:21:05.489525 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8841be7b59ddf9e8fe30b2471a78a443adc91ce48e3f7604508946fe676b5c8" Feb 18 19:21:05 crc kubenswrapper[4754]: I0218 19:21:05.489323 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s" Feb 18 19:21:05 crc kubenswrapper[4754]: I0218 19:21:05.575775 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qztvz"] Feb 18 19:21:05 crc kubenswrapper[4754]: W0218 19:21:05.617924 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod539505bb_b2d2_4adc_be1e_a95f73778a52.slice/crio-cbfb8a687a38f4df2098adb240a4541619b2d9155f1b807f0f6c960b84ea3e05 WatchSource:0}: Error finding container cbfb8a687a38f4df2098adb240a4541619b2d9155f1b807f0f6c960b84ea3e05: Status 404 returned error can't find the container with id cbfb8a687a38f4df2098adb240a4541619b2d9155f1b807f0f6c960b84ea3e05 Feb 18 19:21:05 crc kubenswrapper[4754]: I0218 19:21:05.908046 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:21:06 crc kubenswrapper[4754]: I0218 19:21:06.504779 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qztvz" event={"ID":"539505bb-b2d2-4adc-be1e-a95f73778a52","Type":"ContainerStarted","Data":"e6a1ae94302a207119ed192d541a32db8ec66de0c2cad06756def23e4186f870"} Feb 18 19:21:06 crc kubenswrapper[4754]: I0218 19:21:06.505103 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qztvz" event={"ID":"539505bb-b2d2-4adc-be1e-a95f73778a52","Type":"ContainerStarted","Data":"cbfb8a687a38f4df2098adb240a4541619b2d9155f1b807f0f6c960b84ea3e05"} Feb 18 19:21:07 crc kubenswrapper[4754]: I0218 19:21:07.530863 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qztvz" event={"ID":"539505bb-b2d2-4adc-be1e-a95f73778a52","Type":"ContainerStarted","Data":"fdd393f7c15b46b21c4e7cb691c238de60458a0b125f2d70b6f829f86c697116"} Feb 18 19:21:07 crc 
kubenswrapper[4754]: I0218 19:21:07.555692 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-qztvz" podStartSLOduration=145.555671784 podStartE2EDuration="2m25.555671784s" podCreationTimestamp="2026-02-18 19:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:21:07.551549018 +0000 UTC m=+170.001961844" watchObservedRunningTime="2026-02-18 19:21:07.555671784 +0000 UTC m=+170.006084580" Feb 18 19:21:08 crc kubenswrapper[4754]: I0218 19:21:08.096778 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:21:08 crc kubenswrapper[4754]: I0218 19:21:08.096850 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:21:09 crc kubenswrapper[4754]: I0218 19:21:09.481250 4754 patch_prober.go:28] interesting pod/console-f9d7485db-72dh6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 18 19:21:09 crc kubenswrapper[4754]: I0218 19:21:09.481376 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-72dh6" podUID="a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc" containerName="console" probeResult="failure" output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 
18 19:21:09 crc kubenswrapper[4754]: I0218 19:21:09.870953 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-nszgz" Feb 18 19:21:15 crc kubenswrapper[4754]: I0218 19:21:15.385445 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:21:16 crc kubenswrapper[4754]: I0218 19:21:16.742294 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:21:19 crc kubenswrapper[4754]: I0218 19:21:19.487616 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:21:19 crc kubenswrapper[4754]: I0218 19:21:19.495152 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:21:29 crc kubenswrapper[4754]: I0218 19:21:29.966218 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zg7hz" Feb 18 19:21:31 crc kubenswrapper[4754]: E0218 19:21:31.679279 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 19:21:31 crc kubenswrapper[4754]: E0218 19:21:31.679938 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5h7hm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-6p9z2_openshift-marketplace(4699f1a8-9e55-49b6-a67f-f84bd256fa0f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:21:31 crc kubenswrapper[4754]: E0218 19:21:31.681585 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6p9z2" podUID="4699f1a8-9e55-49b6-a67f-f84bd256fa0f" Feb 18 19:21:31 crc 
kubenswrapper[4754]: E0218 19:21:31.710477 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 19:21:31 crc kubenswrapper[4754]: E0218 19:21:31.710665 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxpdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-5g8x2_openshift-marketplace(666f2f31-98d7-4fd3-ac3a-2a345ea089e2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:21:31 crc kubenswrapper[4754]: E0218 19:21:31.711984 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-5g8x2" podUID="666f2f31-98d7-4fd3-ac3a-2a345ea089e2" Feb 18 19:21:31 crc kubenswrapper[4754]: E0218 19:21:31.728529 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6p9z2" podUID="4699f1a8-9e55-49b6-a67f-f84bd256fa0f" Feb 18 19:21:31 crc kubenswrapper[4754]: E0218 19:21:31.735407 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 19:21:31 crc kubenswrapper[4754]: E0218 19:21:31.735543 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5bfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mqhjn_openshift-marketplace(c74046c9-1f25-4668-b742-abae28a18c9b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:21:31 crc kubenswrapper[4754]: E0218 19:21:31.736671 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mqhjn" podUID="c74046c9-1f25-4668-b742-abae28a18c9b" Feb 18 19:21:31 crc 
kubenswrapper[4754]: E0218 19:21:31.775084 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 19:21:31 crc kubenswrapper[4754]: E0218 19:21:31.775522 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4kfjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-74s7p_openshift-marketplace(54801380-5317-40df-b2c8-1a392650cc50): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:21:31 crc kubenswrapper[4754]: E0218 19:21:31.776783 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-74s7p" podUID="54801380-5317-40df-b2c8-1a392650cc50" Feb 18 19:21:32 crc kubenswrapper[4754]: I0218 19:21:32.725442 4754 generic.go:334] "Generic (PLEG): container finished" podID="cabec9c1-d434-4382-87e9-c488658c02fe" containerID="5c6acf070d08397a42d85ebcc90b3428ea78d0257bfeb1f60ec469f71ee7ddfc" exitCode=0 Feb 18 19:21:32 crc kubenswrapper[4754]: I0218 19:21:32.725591 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2vnz" event={"ID":"cabec9c1-d434-4382-87e9-c488658c02fe","Type":"ContainerDied","Data":"5c6acf070d08397a42d85ebcc90b3428ea78d0257bfeb1f60ec469f71ee7ddfc"} Feb 18 19:21:32 crc kubenswrapper[4754]: I0218 19:21:32.728403 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4mf2" event={"ID":"825e85f9-84d0-4bc6-b250-29365ffbbc38","Type":"ContainerStarted","Data":"d503f0269ba98b768ffca5d38b377ff3c4ecf20fc8a21001c5c11cc1e23aebfb"} Feb 18 19:21:32 crc kubenswrapper[4754]: I0218 19:21:32.731934 4754 generic.go:334] "Generic (PLEG): container finished" podID="a91e02b9-77f2-4adc-8255-ef6dca75c2cf" containerID="38da114372e1bf30546652957e40be2a6e7a2a77dfa7a98326ae20a897a2d79a" exitCode=0 Feb 18 19:21:32 crc kubenswrapper[4754]: I0218 19:21:32.732004 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x9nlb" 
event={"ID":"a91e02b9-77f2-4adc-8255-ef6dca75c2cf","Type":"ContainerDied","Data":"38da114372e1bf30546652957e40be2a6e7a2a77dfa7a98326ae20a897a2d79a"} Feb 18 19:21:32 crc kubenswrapper[4754]: I0218 19:21:32.742320 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqd9p" event={"ID":"51b42468-3dc4-425d-ae7c-de59263bbf39","Type":"ContainerStarted","Data":"944329832a2f0c86fb6b861ae90dfb007da824dbf41975e0295e3834dbc74d33"} Feb 18 19:21:32 crc kubenswrapper[4754]: E0218 19:21:32.750569 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-mqhjn" podUID="c74046c9-1f25-4668-b742-abae28a18c9b" Feb 18 19:21:32 crc kubenswrapper[4754]: E0218 19:21:32.755378 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-5g8x2" podUID="666f2f31-98d7-4fd3-ac3a-2a345ea089e2" Feb 18 19:21:32 crc kubenswrapper[4754]: E0218 19:21:32.775501 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-74s7p" podUID="54801380-5317-40df-b2c8-1a392650cc50" Feb 18 19:21:33 crc kubenswrapper[4754]: I0218 19:21:33.750940 4754 generic.go:334] "Generic (PLEG): container finished" podID="825e85f9-84d0-4bc6-b250-29365ffbbc38" containerID="d503f0269ba98b768ffca5d38b377ff3c4ecf20fc8a21001c5c11cc1e23aebfb" exitCode=0 Feb 18 19:21:33 crc kubenswrapper[4754]: I0218 19:21:33.751034 4754 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4mf2" event={"ID":"825e85f9-84d0-4bc6-b250-29365ffbbc38","Type":"ContainerDied","Data":"d503f0269ba98b768ffca5d38b377ff3c4ecf20fc8a21001c5c11cc1e23aebfb"} Feb 18 19:21:33 crc kubenswrapper[4754]: I0218 19:21:33.755790 4754 generic.go:334] "Generic (PLEG): container finished" podID="51b42468-3dc4-425d-ae7c-de59263bbf39" containerID="944329832a2f0c86fb6b861ae90dfb007da824dbf41975e0295e3834dbc74d33" exitCode=0 Feb 18 19:21:33 crc kubenswrapper[4754]: I0218 19:21:33.755874 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqd9p" event={"ID":"51b42468-3dc4-425d-ae7c-de59263bbf39","Type":"ContainerDied","Data":"944329832a2f0c86fb6b861ae90dfb007da824dbf41975e0295e3834dbc74d33"} Feb 18 19:21:35 crc kubenswrapper[4754]: I0218 19:21:35.265232 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 19:21:35 crc kubenswrapper[4754]: E0218 19:21:35.265927 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec35fd34-34c2-42e0-8e8d-4b210660e5ed" containerName="pruner" Feb 18 19:21:35 crc kubenswrapper[4754]: I0218 19:21:35.265942 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec35fd34-34c2-42e0-8e8d-4b210660e5ed" containerName="pruner" Feb 18 19:21:35 crc kubenswrapper[4754]: E0218 19:21:35.265964 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="019d9051-4d5f-4083-bbb1-b9165ecc2dd0" containerName="pruner" Feb 18 19:21:35 crc kubenswrapper[4754]: I0218 19:21:35.265971 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="019d9051-4d5f-4083-bbb1-b9165ecc2dd0" containerName="pruner" Feb 18 19:21:35 crc kubenswrapper[4754]: E0218 19:21:35.265984 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e" containerName="collect-profiles" Feb 18 19:21:35 crc kubenswrapper[4754]: I0218 19:21:35.265993 4754 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e" containerName="collect-profiles" Feb 18 19:21:35 crc kubenswrapper[4754]: I0218 19:21:35.266124 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e" containerName="collect-profiles" Feb 18 19:21:35 crc kubenswrapper[4754]: I0218 19:21:35.266167 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="019d9051-4d5f-4083-bbb1-b9165ecc2dd0" containerName="pruner" Feb 18 19:21:35 crc kubenswrapper[4754]: I0218 19:21:35.266185 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec35fd34-34c2-42e0-8e8d-4b210660e5ed" containerName="pruner" Feb 18 19:21:35 crc kubenswrapper[4754]: I0218 19:21:35.266755 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:21:35 crc kubenswrapper[4754]: I0218 19:21:35.268213 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 19:21:35 crc kubenswrapper[4754]: I0218 19:21:35.278691 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 19:21:35 crc kubenswrapper[4754]: I0218 19:21:35.279101 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 19:21:35 crc kubenswrapper[4754]: I0218 19:21:35.383462 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/786c7c56-b337-44f4-b65e-c85dd99b6cd4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"786c7c56-b337-44f4-b65e-c85dd99b6cd4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:21:35 crc kubenswrapper[4754]: I0218 19:21:35.383822 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/786c7c56-b337-44f4-b65e-c85dd99b6cd4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"786c7c56-b337-44f4-b65e-c85dd99b6cd4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:21:35 crc kubenswrapper[4754]: I0218 19:21:35.485449 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/786c7c56-b337-44f4-b65e-c85dd99b6cd4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"786c7c56-b337-44f4-b65e-c85dd99b6cd4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:21:35 crc kubenswrapper[4754]: I0218 19:21:35.485820 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/786c7c56-b337-44f4-b65e-c85dd99b6cd4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"786c7c56-b337-44f4-b65e-c85dd99b6cd4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:21:35 crc kubenswrapper[4754]: I0218 19:21:35.485614 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/786c7c56-b337-44f4-b65e-c85dd99b6cd4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"786c7c56-b337-44f4-b65e-c85dd99b6cd4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:21:35 crc kubenswrapper[4754]: I0218 19:21:35.509706 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/786c7c56-b337-44f4-b65e-c85dd99b6cd4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"786c7c56-b337-44f4-b65e-c85dd99b6cd4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:21:35 crc kubenswrapper[4754]: I0218 19:21:35.595179 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:21:36 crc kubenswrapper[4754]: I0218 19:21:36.454647 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 19:21:36 crc kubenswrapper[4754]: W0218 19:21:36.468959 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod786c7c56_b337_44f4_b65e_c85dd99b6cd4.slice/crio-7a2170c3b48bd24864b8b1d7f3f81833f096930f6e73486d4177b42dcb0570ef WatchSource:0}: Error finding container 7a2170c3b48bd24864b8b1d7f3f81833f096930f6e73486d4177b42dcb0570ef: Status 404 returned error can't find the container with id 7a2170c3b48bd24864b8b1d7f3f81833f096930f6e73486d4177b42dcb0570ef Feb 18 19:21:36 crc kubenswrapper[4754]: I0218 19:21:36.779002 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"786c7c56-b337-44f4-b65e-c85dd99b6cd4","Type":"ContainerStarted","Data":"7a2170c3b48bd24864b8b1d7f3f81833f096930f6e73486d4177b42dcb0570ef"} Feb 18 19:21:36 crc kubenswrapper[4754]: I0218 19:21:36.781916 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x9nlb" event={"ID":"a91e02b9-77f2-4adc-8255-ef6dca75c2cf","Type":"ContainerStarted","Data":"f76b7291b614c9eabdb7729d041274156b4e58a60d8ffd716cea2b82a6473923"} Feb 18 19:21:36 crc kubenswrapper[4754]: I0218 19:21:36.803336 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x9nlb" podStartSLOduration=2.955633766 podStartE2EDuration="40.803296384s" podCreationTimestamp="2026-02-18 19:20:56 +0000 UTC" firstStartedPulling="2026-02-18 19:20:58.106993895 +0000 UTC m=+160.557406701" lastFinishedPulling="2026-02-18 19:21:35.954656523 +0000 UTC m=+198.405069319" observedRunningTime="2026-02-18 19:21:36.799996231 +0000 UTC m=+199.250409027" watchObservedRunningTime="2026-02-18 
19:21:36.803296384 +0000 UTC m=+199.253709180" Feb 18 19:21:37 crc kubenswrapper[4754]: I0218 19:21:37.129087 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x9nlb" Feb 18 19:21:37 crc kubenswrapper[4754]: I0218 19:21:37.129166 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x9nlb" Feb 18 19:21:38 crc kubenswrapper[4754]: I0218 19:21:38.096713 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:21:38 crc kubenswrapper[4754]: I0218 19:21:38.097081 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:21:38 crc kubenswrapper[4754]: I0218 19:21:38.802990 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqd9p" event={"ID":"51b42468-3dc4-425d-ae7c-de59263bbf39","Type":"ContainerStarted","Data":"99f5323af030d766e4947277a42d9b7ecfc9cdcdd8079f7a2d670b7f1e6b66da"} Feb 18 19:21:38 crc kubenswrapper[4754]: I0218 19:21:38.805013 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"786c7c56-b337-44f4-b65e-c85dd99b6cd4","Type":"ContainerStarted","Data":"019b68de05137f6aaa38143d9e9ff298fdf3cf79436d780eb64eabc1b3be672f"} Feb 18 19:21:38 crc kubenswrapper[4754]: I0218 19:21:38.810869 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2vnz" 
event={"ID":"cabec9c1-d434-4382-87e9-c488658c02fe","Type":"ContainerStarted","Data":"2a39424223d27318f7b1d0ff56381c0969e63c845e5af658cf26d835cf2a7cd9"} Feb 18 19:21:38 crc kubenswrapper[4754]: I0218 19:21:38.824266 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kqd9p" podStartSLOduration=2.554875372 podStartE2EDuration="39.824246227s" podCreationTimestamp="2026-02-18 19:20:59 +0000 UTC" firstStartedPulling="2026-02-18 19:21:01.297112168 +0000 UTC m=+163.747524964" lastFinishedPulling="2026-02-18 19:21:38.566483023 +0000 UTC m=+201.016895819" observedRunningTime="2026-02-18 19:21:38.820758728 +0000 UTC m=+201.271171524" watchObservedRunningTime="2026-02-18 19:21:38.824246227 +0000 UTC m=+201.274659023" Feb 18 19:21:38 crc kubenswrapper[4754]: I0218 19:21:38.839931 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n4mf2" podStartSLOduration=1.5119287479999999 podStartE2EDuration="38.839910476s" podCreationTimestamp="2026-02-18 19:21:00 +0000 UTC" firstStartedPulling="2026-02-18 19:21:01.276205282 +0000 UTC m=+163.726618078" lastFinishedPulling="2026-02-18 19:21:38.60418701 +0000 UTC m=+201.054599806" observedRunningTime="2026-02-18 19:21:38.835853286 +0000 UTC m=+201.286266092" watchObservedRunningTime="2026-02-18 19:21:38.839910476 +0000 UTC m=+201.290323272" Feb 18 19:21:38 crc kubenswrapper[4754]: I0218 19:21:38.859783 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k2vnz" podStartSLOduration=3.04273325 podStartE2EDuration="40.859756968s" podCreationTimestamp="2026-02-18 19:20:58 +0000 UTC" firstStartedPulling="2026-02-18 19:21:00.238182165 +0000 UTC m=+162.688594961" lastFinishedPulling="2026-02-18 19:21:38.055205853 +0000 UTC m=+200.505618679" observedRunningTime="2026-02-18 19:21:38.855647867 +0000 UTC m=+201.306060663" watchObservedRunningTime="2026-02-18 
19:21:38.859756968 +0000 UTC m=+201.310169764" Feb 18 19:21:38 crc kubenswrapper[4754]: I0218 19:21:38.893966 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=3.893667175 podStartE2EDuration="3.893667175s" podCreationTimestamp="2026-02-18 19:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:21:38.876277276 +0000 UTC m=+201.326690072" watchObservedRunningTime="2026-02-18 19:21:38.893667175 +0000 UTC m=+201.344079971" Feb 18 19:21:38 crc kubenswrapper[4754]: I0218 19:21:38.925934 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-x9nlb" podUID="a91e02b9-77f2-4adc-8255-ef6dca75c2cf" containerName="registry-server" probeResult="failure" output=< Feb 18 19:21:38 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Feb 18 19:21:38 crc kubenswrapper[4754]: > Feb 18 19:21:39 crc kubenswrapper[4754]: I0218 19:21:39.040022 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k2vnz" Feb 18 19:21:39 crc kubenswrapper[4754]: I0218 19:21:39.040099 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k2vnz" Feb 18 19:21:39 crc kubenswrapper[4754]: I0218 19:21:39.821369 4754 generic.go:334] "Generic (PLEG): container finished" podID="786c7c56-b337-44f4-b65e-c85dd99b6cd4" containerID="019b68de05137f6aaa38143d9e9ff298fdf3cf79436d780eb64eabc1b3be672f" exitCode=0 Feb 18 19:21:39 crc kubenswrapper[4754]: I0218 19:21:39.821887 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"786c7c56-b337-44f4-b65e-c85dd99b6cd4","Type":"ContainerDied","Data":"019b68de05137f6aaa38143d9e9ff298fdf3cf79436d780eb64eabc1b3be672f"} Feb 
18 19:21:39 crc kubenswrapper[4754]: I0218 19:21:39.832221 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4mf2" event={"ID":"825e85f9-84d0-4bc6-b250-29365ffbbc38","Type":"ContainerStarted","Data":"4b3761bd1af2645730ae03f0fa6f7731a131ce8de59f86e17126c1971badfd2f"} Feb 18 19:21:40 crc kubenswrapper[4754]: I0218 19:21:40.090033 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-k2vnz" podUID="cabec9c1-d434-4382-87e9-c488658c02fe" containerName="registry-server" probeResult="failure" output=< Feb 18 19:21:40 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Feb 18 19:21:40 crc kubenswrapper[4754]: > Feb 18 19:21:40 crc kubenswrapper[4754]: I0218 19:21:40.180712 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kqd9p" Feb 18 19:21:40 crc kubenswrapper[4754]: I0218 19:21:40.180796 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kqd9p" Feb 18 19:21:40 crc kubenswrapper[4754]: I0218 19:21:40.451799 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n4mf2" Feb 18 19:21:40 crc kubenswrapper[4754]: I0218 19:21:40.451895 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n4mf2" Feb 18 19:21:40 crc kubenswrapper[4754]: I0218 19:21:40.653208 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 19:21:40 crc kubenswrapper[4754]: I0218 19:21:40.654269 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:21:40 crc kubenswrapper[4754]: I0218 19:21:40.668330 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 19:21:40 crc kubenswrapper[4754]: I0218 19:21:40.797430 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3c10de6-a522-4975-9705-210bf58415f8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e3c10de6-a522-4975-9705-210bf58415f8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:21:40 crc kubenswrapper[4754]: I0218 19:21:40.798023 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e3c10de6-a522-4975-9705-210bf58415f8-var-lock\") pod \"installer-9-crc\" (UID: \"e3c10de6-a522-4975-9705-210bf58415f8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:21:40 crc kubenswrapper[4754]: I0218 19:21:40.798124 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3c10de6-a522-4975-9705-210bf58415f8-kube-api-access\") pod \"installer-9-crc\" (UID: \"e3c10de6-a522-4975-9705-210bf58415f8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:21:40 crc kubenswrapper[4754]: I0218 19:21:40.899048 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3c10de6-a522-4975-9705-210bf58415f8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e3c10de6-a522-4975-9705-210bf58415f8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:21:40 crc kubenswrapper[4754]: I0218 19:21:40.899107 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/e3c10de6-a522-4975-9705-210bf58415f8-var-lock\") pod \"installer-9-crc\" (UID: \"e3c10de6-a522-4975-9705-210bf58415f8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:21:40 crc kubenswrapper[4754]: I0218 19:21:40.899174 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3c10de6-a522-4975-9705-210bf58415f8-kube-api-access\") pod \"installer-9-crc\" (UID: \"e3c10de6-a522-4975-9705-210bf58415f8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:21:40 crc kubenswrapper[4754]: I0218 19:21:40.899211 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3c10de6-a522-4975-9705-210bf58415f8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e3c10de6-a522-4975-9705-210bf58415f8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:21:40 crc kubenswrapper[4754]: I0218 19:21:40.899385 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e3c10de6-a522-4975-9705-210bf58415f8-var-lock\") pod \"installer-9-crc\" (UID: \"e3c10de6-a522-4975-9705-210bf58415f8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:21:40 crc kubenswrapper[4754]: I0218 19:21:40.921675 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3c10de6-a522-4975-9705-210bf58415f8-kube-api-access\") pod \"installer-9-crc\" (UID: \"e3c10de6-a522-4975-9705-210bf58415f8\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:21:40 crc kubenswrapper[4754]: I0218 19:21:40.972910 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:21:41 crc kubenswrapper[4754]: I0218 19:21:41.163766 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:21:41 crc kubenswrapper[4754]: I0218 19:21:41.222005 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kqd9p" podUID="51b42468-3dc4-425d-ae7c-de59263bbf39" containerName="registry-server" probeResult="failure" output=< Feb 18 19:21:41 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Feb 18 19:21:41 crc kubenswrapper[4754]: > Feb 18 19:21:41 crc kubenswrapper[4754]: I0218 19:21:41.305531 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/786c7c56-b337-44f4-b65e-c85dd99b6cd4-kubelet-dir\") pod \"786c7c56-b337-44f4-b65e-c85dd99b6cd4\" (UID: \"786c7c56-b337-44f4-b65e-c85dd99b6cd4\") " Feb 18 19:21:41 crc kubenswrapper[4754]: I0218 19:21:41.305599 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/786c7c56-b337-44f4-b65e-c85dd99b6cd4-kube-api-access\") pod \"786c7c56-b337-44f4-b65e-c85dd99b6cd4\" (UID: \"786c7c56-b337-44f4-b65e-c85dd99b6cd4\") " Feb 18 19:21:41 crc kubenswrapper[4754]: I0218 19:21:41.305722 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/786c7c56-b337-44f4-b65e-c85dd99b6cd4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "786c7c56-b337-44f4-b65e-c85dd99b6cd4" (UID: "786c7c56-b337-44f4-b65e-c85dd99b6cd4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:21:41 crc kubenswrapper[4754]: I0218 19:21:41.305998 4754 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/786c7c56-b337-44f4-b65e-c85dd99b6cd4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:41 crc kubenswrapper[4754]: I0218 19:21:41.312759 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/786c7c56-b337-44f4-b65e-c85dd99b6cd4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "786c7c56-b337-44f4-b65e-c85dd99b6cd4" (UID: "786c7c56-b337-44f4-b65e-c85dd99b6cd4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:21:41 crc kubenswrapper[4754]: I0218 19:21:41.388918 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 19:21:41 crc kubenswrapper[4754]: I0218 19:21:41.407525 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/786c7c56-b337-44f4-b65e-c85dd99b6cd4-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:41 crc kubenswrapper[4754]: I0218 19:21:41.494726 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n4mf2" podUID="825e85f9-84d0-4bc6-b250-29365ffbbc38" containerName="registry-server" probeResult="failure" output=< Feb 18 19:21:41 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Feb 18 19:21:41 crc kubenswrapper[4754]: > Feb 18 19:21:41 crc kubenswrapper[4754]: I0218 19:21:41.845560 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e3c10de6-a522-4975-9705-210bf58415f8","Type":"ContainerStarted","Data":"ec08db060729dab221db8f4ef55b3c5b8bb8382da2e530531a3488f62997a41a"} Feb 18 19:21:41 crc kubenswrapper[4754]: I0218 19:21:41.847531 4754 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"786c7c56-b337-44f4-b65e-c85dd99b6cd4","Type":"ContainerDied","Data":"7a2170c3b48bd24864b8b1d7f3f81833f096930f6e73486d4177b42dcb0570ef"} Feb 18 19:21:41 crc kubenswrapper[4754]: I0218 19:21:41.847597 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a2170c3b48bd24864b8b1d7f3f81833f096930f6e73486d4177b42dcb0570ef" Feb 18 19:21:41 crc kubenswrapper[4754]: I0218 19:21:41.847742 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 19:21:42 crc kubenswrapper[4754]: I0218 19:21:42.856315 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e3c10de6-a522-4975-9705-210bf58415f8","Type":"ContainerStarted","Data":"bb710091fd9fc7d0cc5a5e57eb436fd045f7d6d266d9caca01316b044a52babb"} Feb 18 19:21:42 crc kubenswrapper[4754]: I0218 19:21:42.878362 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.878333751 podStartE2EDuration="2.878333751s" podCreationTimestamp="2026-02-18 19:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:21:42.873572157 +0000 UTC m=+205.323984953" watchObservedRunningTime="2026-02-18 19:21:42.878333751 +0000 UTC m=+205.328746557" Feb 18 19:21:47 crc kubenswrapper[4754]: I0218 19:21:47.208758 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x9nlb" Feb 18 19:21:47 crc kubenswrapper[4754]: I0218 19:21:47.263108 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x9nlb" Feb 18 19:21:47 crc kubenswrapper[4754]: I0218 19:21:47.903251 
4754 generic.go:334] "Generic (PLEG): container finished" podID="666f2f31-98d7-4fd3-ac3a-2a345ea089e2" containerID="cc6f4d69098b70e666cb4ececede326948abaf7100680f17dca151c75dc6b18d" exitCode=0 Feb 18 19:21:47 crc kubenswrapper[4754]: I0218 19:21:47.903379 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8x2" event={"ID":"666f2f31-98d7-4fd3-ac3a-2a345ea089e2","Type":"ContainerDied","Data":"cc6f4d69098b70e666cb4ececede326948abaf7100680f17dca151c75dc6b18d"} Feb 18 19:21:49 crc kubenswrapper[4754]: I0218 19:21:49.088287 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k2vnz" Feb 18 19:21:49 crc kubenswrapper[4754]: I0218 19:21:49.135821 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k2vnz" Feb 18 19:21:49 crc kubenswrapper[4754]: I0218 19:21:49.914223 4754 generic.go:334] "Generic (PLEG): container finished" podID="4699f1a8-9e55-49b6-a67f-f84bd256fa0f" containerID="58ed7acd122dd7586e4c21fc5393be6cc7a7e84e56c9c3b803825dc6bf9b529b" exitCode=0 Feb 18 19:21:49 crc kubenswrapper[4754]: I0218 19:21:49.914527 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6p9z2" event={"ID":"4699f1a8-9e55-49b6-a67f-f84bd256fa0f","Type":"ContainerDied","Data":"58ed7acd122dd7586e4c21fc5393be6cc7a7e84e56c9c3b803825dc6bf9b529b"} Feb 18 19:21:49 crc kubenswrapper[4754]: I0218 19:21:49.915604 4754 generic.go:334] "Generic (PLEG): container finished" podID="54801380-5317-40df-b2c8-1a392650cc50" containerID="9c4f72f69645103255b63270cadf5866814176bc0a177f3a14059c6b114db4b3" exitCode=0 Feb 18 19:21:49 crc kubenswrapper[4754]: I0218 19:21:49.915676 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74s7p" 
event={"ID":"54801380-5317-40df-b2c8-1a392650cc50","Type":"ContainerDied","Data":"9c4f72f69645103255b63270cadf5866814176bc0a177f3a14059c6b114db4b3"} Feb 18 19:21:49 crc kubenswrapper[4754]: I0218 19:21:49.916903 4754 generic.go:334] "Generic (PLEG): container finished" podID="c74046c9-1f25-4668-b742-abae28a18c9b" containerID="7875cf9c5a3b0d612181041a87da207367d86a05d01564e77299453a51696286" exitCode=0 Feb 18 19:21:49 crc kubenswrapper[4754]: I0218 19:21:49.916948 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqhjn" event={"ID":"c74046c9-1f25-4668-b742-abae28a18c9b","Type":"ContainerDied","Data":"7875cf9c5a3b0d612181041a87da207367d86a05d01564e77299453a51696286"} Feb 18 19:21:49 crc kubenswrapper[4754]: I0218 19:21:49.920830 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8x2" event={"ID":"666f2f31-98d7-4fd3-ac3a-2a345ea089e2","Type":"ContainerStarted","Data":"3f146aa54a104b67221e9c390fff7119a0966ab1a4f5c902c6eb8dacb2e82bdf"} Feb 18 19:21:50 crc kubenswrapper[4754]: I0218 19:21:50.010435 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5g8x2" podStartSLOduration=3.068836664 podStartE2EDuration="51.010413046s" podCreationTimestamp="2026-02-18 19:20:59 +0000 UTC" firstStartedPulling="2026-02-18 19:21:01.26755419 +0000 UTC m=+163.717966986" lastFinishedPulling="2026-02-18 19:21:49.209130572 +0000 UTC m=+211.659543368" observedRunningTime="2026-02-18 19:21:50.009325528 +0000 UTC m=+212.459738344" watchObservedRunningTime="2026-02-18 19:21:50.010413046 +0000 UTC m=+212.460825832" Feb 18 19:21:50 crc kubenswrapper[4754]: I0218 19:21:50.225617 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kqd9p" Feb 18 19:21:50 crc kubenswrapper[4754]: I0218 19:21:50.265759 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-kqd9p" Feb 18 19:21:50 crc kubenswrapper[4754]: I0218 19:21:50.502527 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n4mf2" Feb 18 19:21:50 crc kubenswrapper[4754]: I0218 19:21:50.555862 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n4mf2" Feb 18 19:21:50 crc kubenswrapper[4754]: I0218 19:21:50.928355 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74s7p" event={"ID":"54801380-5317-40df-b2c8-1a392650cc50","Type":"ContainerStarted","Data":"c525697f7777834677c9fc652b76ee02b032bca1005811f7d0fc2a4ce04afb2c"} Feb 18 19:21:50 crc kubenswrapper[4754]: I0218 19:21:50.930588 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqhjn" event={"ID":"c74046c9-1f25-4668-b742-abae28a18c9b","Type":"ContainerStarted","Data":"0ce156414185432719daf8098073b440743f22832634e5a530c7a6b29aa4aef8"} Feb 18 19:21:50 crc kubenswrapper[4754]: I0218 19:21:50.932319 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6p9z2" event={"ID":"4699f1a8-9e55-49b6-a67f-f84bd256fa0f","Type":"ContainerStarted","Data":"2e37716ed0c3c206e0f8735547718eb6c86af1e25889c3a19b165bd8c72c0516"} Feb 18 19:21:50 crc kubenswrapper[4754]: I0218 19:21:50.950613 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-74s7p" podStartSLOduration=1.5932331400000002 podStartE2EDuration="53.950590994s" podCreationTimestamp="2026-02-18 19:20:57 +0000 UTC" firstStartedPulling="2026-02-18 19:20:58.123399945 +0000 UTC m=+160.573812741" lastFinishedPulling="2026-02-18 19:21:50.480757799 +0000 UTC m=+212.931170595" observedRunningTime="2026-02-18 19:21:50.950543383 +0000 UTC m=+213.400956179" 
watchObservedRunningTime="2026-02-18 19:21:50.950590994 +0000 UTC m=+213.401003790" Feb 18 19:21:50 crc kubenswrapper[4754]: I0218 19:21:50.968925 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mqhjn" podStartSLOduration=2.531472587 podStartE2EDuration="54.968904614s" podCreationTimestamp="2026-02-18 19:20:56 +0000 UTC" firstStartedPulling="2026-02-18 19:20:58.102652132 +0000 UTC m=+160.553064928" lastFinishedPulling="2026-02-18 19:21:50.540084159 +0000 UTC m=+212.990496955" observedRunningTime="2026-02-18 19:21:50.966106007 +0000 UTC m=+213.416518803" watchObservedRunningTime="2026-02-18 19:21:50.968904614 +0000 UTC m=+213.419317410" Feb 18 19:21:50 crc kubenswrapper[4754]: I0218 19:21:50.993738 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6p9z2" podStartSLOduration=2.766811299 podStartE2EDuration="54.993718578s" podCreationTimestamp="2026-02-18 19:20:56 +0000 UTC" firstStartedPulling="2026-02-18 19:20:58.11252321 +0000 UTC m=+160.562936016" lastFinishedPulling="2026-02-18 19:21:50.339430499 +0000 UTC m=+212.789843295" observedRunningTime="2026-02-18 19:21:50.991169349 +0000 UTC m=+213.441582165" watchObservedRunningTime="2026-02-18 19:21:50.993718578 +0000 UTC m=+213.444131374" Feb 18 19:21:54 crc kubenswrapper[4754]: I0218 19:21:54.249705 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n4mf2"] Feb 18 19:21:54 crc kubenswrapper[4754]: I0218 19:21:54.250577 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n4mf2" podUID="825e85f9-84d0-4bc6-b250-29365ffbbc38" containerName="registry-server" containerID="cri-o://4b3761bd1af2645730ae03f0fa6f7731a131ce8de59f86e17126c1971badfd2f" gracePeriod=2 Feb 18 19:21:54 crc kubenswrapper[4754]: I0218 19:21:54.706769 4754 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n4mf2" Feb 18 19:21:54 crc kubenswrapper[4754]: I0218 19:21:54.909570 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/825e85f9-84d0-4bc6-b250-29365ffbbc38-utilities\") pod \"825e85f9-84d0-4bc6-b250-29365ffbbc38\" (UID: \"825e85f9-84d0-4bc6-b250-29365ffbbc38\") " Feb 18 19:21:54 crc kubenswrapper[4754]: I0218 19:21:54.909759 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plmgw\" (UniqueName: \"kubernetes.io/projected/825e85f9-84d0-4bc6-b250-29365ffbbc38-kube-api-access-plmgw\") pod \"825e85f9-84d0-4bc6-b250-29365ffbbc38\" (UID: \"825e85f9-84d0-4bc6-b250-29365ffbbc38\") " Feb 18 19:21:54 crc kubenswrapper[4754]: I0218 19:21:54.909804 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/825e85f9-84d0-4bc6-b250-29365ffbbc38-catalog-content\") pod \"825e85f9-84d0-4bc6-b250-29365ffbbc38\" (UID: \"825e85f9-84d0-4bc6-b250-29365ffbbc38\") " Feb 18 19:21:54 crc kubenswrapper[4754]: I0218 19:21:54.910578 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/825e85f9-84d0-4bc6-b250-29365ffbbc38-utilities" (OuterVolumeSpecName: "utilities") pod "825e85f9-84d0-4bc6-b250-29365ffbbc38" (UID: "825e85f9-84d0-4bc6-b250-29365ffbbc38"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:21:54 crc kubenswrapper[4754]: I0218 19:21:54.917954 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/825e85f9-84d0-4bc6-b250-29365ffbbc38-kube-api-access-plmgw" (OuterVolumeSpecName: "kube-api-access-plmgw") pod "825e85f9-84d0-4bc6-b250-29365ffbbc38" (UID: "825e85f9-84d0-4bc6-b250-29365ffbbc38"). InnerVolumeSpecName "kube-api-access-plmgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:21:54 crc kubenswrapper[4754]: I0218 19:21:54.963909 4754 generic.go:334] "Generic (PLEG): container finished" podID="825e85f9-84d0-4bc6-b250-29365ffbbc38" containerID="4b3761bd1af2645730ae03f0fa6f7731a131ce8de59f86e17126c1971badfd2f" exitCode=0 Feb 18 19:21:54 crc kubenswrapper[4754]: I0218 19:21:54.964003 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4mf2" event={"ID":"825e85f9-84d0-4bc6-b250-29365ffbbc38","Type":"ContainerDied","Data":"4b3761bd1af2645730ae03f0fa6f7731a131ce8de59f86e17126c1971badfd2f"} Feb 18 19:21:54 crc kubenswrapper[4754]: I0218 19:21:54.964070 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n4mf2" Feb 18 19:21:54 crc kubenswrapper[4754]: I0218 19:21:54.964856 4754 scope.go:117] "RemoveContainer" containerID="4b3761bd1af2645730ae03f0fa6f7731a131ce8de59f86e17126c1971badfd2f" Feb 18 19:21:54 crc kubenswrapper[4754]: I0218 19:21:54.964727 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4mf2" event={"ID":"825e85f9-84d0-4bc6-b250-29365ffbbc38","Type":"ContainerDied","Data":"388a5cfe8c234200bc1cc60a65d28a9edef8ca783945b063db372214bbd0353b"} Feb 18 19:21:54 crc kubenswrapper[4754]: I0218 19:21:54.993337 4754 scope.go:117] "RemoveContainer" containerID="d503f0269ba98b768ffca5d38b377ff3c4ecf20fc8a21001c5c11cc1e23aebfb" Feb 18 19:21:55 crc kubenswrapper[4754]: I0218 19:21:55.011387 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plmgw\" (UniqueName: \"kubernetes.io/projected/825e85f9-84d0-4bc6-b250-29365ffbbc38-kube-api-access-plmgw\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:55 crc kubenswrapper[4754]: I0218 19:21:55.011434 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/825e85f9-84d0-4bc6-b250-29365ffbbc38-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:55 crc kubenswrapper[4754]: I0218 19:21:55.033905 4754 scope.go:117] "RemoveContainer" containerID="8fd4d284b5d577131ddca53f4beb4b13b2a95d9728b26e2211577956d0b361f5" Feb 18 19:21:55 crc kubenswrapper[4754]: I0218 19:21:55.059526 4754 scope.go:117] "RemoveContainer" containerID="4b3761bd1af2645730ae03f0fa6f7731a131ce8de59f86e17126c1971badfd2f" Feb 18 19:21:55 crc kubenswrapper[4754]: E0218 19:21:55.060191 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b3761bd1af2645730ae03f0fa6f7731a131ce8de59f86e17126c1971badfd2f\": container with ID starting with 4b3761bd1af2645730ae03f0fa6f7731a131ce8de59f86e17126c1971badfd2f not found: ID does not exist" containerID="4b3761bd1af2645730ae03f0fa6f7731a131ce8de59f86e17126c1971badfd2f" Feb 18 19:21:55 crc kubenswrapper[4754]: I0218 19:21:55.060251 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b3761bd1af2645730ae03f0fa6f7731a131ce8de59f86e17126c1971badfd2f"} err="failed to get container status \"4b3761bd1af2645730ae03f0fa6f7731a131ce8de59f86e17126c1971badfd2f\": rpc error: code = NotFound desc = could not find container \"4b3761bd1af2645730ae03f0fa6f7731a131ce8de59f86e17126c1971badfd2f\": container with ID starting with 4b3761bd1af2645730ae03f0fa6f7731a131ce8de59f86e17126c1971badfd2f not found: ID does not exist" Feb 18 19:21:55 crc kubenswrapper[4754]: I0218 19:21:55.060309 4754 scope.go:117] "RemoveContainer" containerID="d503f0269ba98b768ffca5d38b377ff3c4ecf20fc8a21001c5c11cc1e23aebfb" Feb 18 19:21:55 crc kubenswrapper[4754]: E0218 19:21:55.060854 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d503f0269ba98b768ffca5d38b377ff3c4ecf20fc8a21001c5c11cc1e23aebfb\": container with ID starting with 
d503f0269ba98b768ffca5d38b377ff3c4ecf20fc8a21001c5c11cc1e23aebfb not found: ID does not exist" containerID="d503f0269ba98b768ffca5d38b377ff3c4ecf20fc8a21001c5c11cc1e23aebfb" Feb 18 19:21:55 crc kubenswrapper[4754]: I0218 19:21:55.061042 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d503f0269ba98b768ffca5d38b377ff3c4ecf20fc8a21001c5c11cc1e23aebfb"} err="failed to get container status \"d503f0269ba98b768ffca5d38b377ff3c4ecf20fc8a21001c5c11cc1e23aebfb\": rpc error: code = NotFound desc = could not find container \"d503f0269ba98b768ffca5d38b377ff3c4ecf20fc8a21001c5c11cc1e23aebfb\": container with ID starting with d503f0269ba98b768ffca5d38b377ff3c4ecf20fc8a21001c5c11cc1e23aebfb not found: ID does not exist" Feb 18 19:21:55 crc kubenswrapper[4754]: I0218 19:21:55.061422 4754 scope.go:117] "RemoveContainer" containerID="8fd4d284b5d577131ddca53f4beb4b13b2a95d9728b26e2211577956d0b361f5" Feb 18 19:21:55 crc kubenswrapper[4754]: E0218 19:21:55.062017 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fd4d284b5d577131ddca53f4beb4b13b2a95d9728b26e2211577956d0b361f5\": container with ID starting with 8fd4d284b5d577131ddca53f4beb4b13b2a95d9728b26e2211577956d0b361f5 not found: ID does not exist" containerID="8fd4d284b5d577131ddca53f4beb4b13b2a95d9728b26e2211577956d0b361f5" Feb 18 19:21:55 crc kubenswrapper[4754]: I0218 19:21:55.062048 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fd4d284b5d577131ddca53f4beb4b13b2a95d9728b26e2211577956d0b361f5"} err="failed to get container status \"8fd4d284b5d577131ddca53f4beb4b13b2a95d9728b26e2211577956d0b361f5\": rpc error: code = NotFound desc = could not find container \"8fd4d284b5d577131ddca53f4beb4b13b2a95d9728b26e2211577956d0b361f5\": container with ID starting with 8fd4d284b5d577131ddca53f4beb4b13b2a95d9728b26e2211577956d0b361f5 not found: ID does not 
exist" Feb 18 19:21:55 crc kubenswrapper[4754]: I0218 19:21:55.121839 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/825e85f9-84d0-4bc6-b250-29365ffbbc38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "825e85f9-84d0-4bc6-b250-29365ffbbc38" (UID: "825e85f9-84d0-4bc6-b250-29365ffbbc38"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:21:55 crc kubenswrapper[4754]: I0218 19:21:55.215595 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/825e85f9-84d0-4bc6-b250-29365ffbbc38-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:55 crc kubenswrapper[4754]: I0218 19:21:55.315981 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n4mf2"] Feb 18 19:21:55 crc kubenswrapper[4754]: I0218 19:21:55.319220 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n4mf2"] Feb 18 19:21:56 crc kubenswrapper[4754]: I0218 19:21:56.223752 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="825e85f9-84d0-4bc6-b250-29365ffbbc38" path="/var/lib/kubelet/pods/825e85f9-84d0-4bc6-b250-29365ffbbc38/volumes" Feb 18 19:21:56 crc kubenswrapper[4754]: I0218 19:21:56.840495 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6p9z2" Feb 18 19:21:56 crc kubenswrapper[4754]: I0218 19:21:56.840963 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6p9z2" Feb 18 19:21:56 crc kubenswrapper[4754]: I0218 19:21:56.904905 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6p9z2" Feb 18 19:21:57 crc kubenswrapper[4754]: I0218 19:21:57.024866 4754 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6p9z2" Feb 18 19:21:57 crc kubenswrapper[4754]: I0218 19:21:57.258864 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mqhjn" Feb 18 19:21:57 crc kubenswrapper[4754]: I0218 19:21:57.258932 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mqhjn" Feb 18 19:21:57 crc kubenswrapper[4754]: I0218 19:21:57.305972 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mqhjn" Feb 18 19:21:57 crc kubenswrapper[4754]: I0218 19:21:57.466003 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-74s7p" Feb 18 19:21:57 crc kubenswrapper[4754]: I0218 19:21:57.466069 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-74s7p" Feb 18 19:21:57 crc kubenswrapper[4754]: I0218 19:21:57.502938 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-74s7p" Feb 18 19:21:58 crc kubenswrapper[4754]: I0218 19:21:58.028807 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mqhjn" Feb 18 19:21:58 crc kubenswrapper[4754]: I0218 19:21:58.046117 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-74s7p" Feb 18 19:21:59 crc kubenswrapper[4754]: I0218 19:21:59.248636 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mqhjn"] Feb 18 19:21:59 crc kubenswrapper[4754]: I0218 19:21:59.457114 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5g8x2" Feb 18 19:21:59 crc 
kubenswrapper[4754]: I0218 19:21:59.457556 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5g8x2" Feb 18 19:21:59 crc kubenswrapper[4754]: I0218 19:21:59.522077 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5g8x2" Feb 18 19:21:59 crc kubenswrapper[4754]: I0218 19:21:59.994057 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mqhjn" podUID="c74046c9-1f25-4668-b742-abae28a18c9b" containerName="registry-server" containerID="cri-o://0ce156414185432719daf8098073b440743f22832634e5a530c7a6b29aa4aef8" gracePeriod=2 Feb 18 19:22:00 crc kubenswrapper[4754]: I0218 19:22:00.057437 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5g8x2" Feb 18 19:22:00 crc kubenswrapper[4754]: I0218 19:22:00.247757 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-74s7p"] Feb 18 19:22:00 crc kubenswrapper[4754]: I0218 19:22:00.248491 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-74s7p" podUID="54801380-5317-40df-b2c8-1a392650cc50" containerName="registry-server" containerID="cri-o://c525697f7777834677c9fc652b76ee02b032bca1005811f7d0fc2a4ce04afb2c" gracePeriod=2 Feb 18 19:22:00 crc kubenswrapper[4754]: I0218 19:22:00.405273 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mqhjn" Feb 18 19:22:00 crc kubenswrapper[4754]: I0218 19:22:00.510420 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74046c9-1f25-4668-b742-abae28a18c9b-utilities\") pod \"c74046c9-1f25-4668-b742-abae28a18c9b\" (UID: \"c74046c9-1f25-4668-b742-abae28a18c9b\") " Feb 18 19:22:00 crc kubenswrapper[4754]: I0218 19:22:00.510509 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5bfl\" (UniqueName: \"kubernetes.io/projected/c74046c9-1f25-4668-b742-abae28a18c9b-kube-api-access-n5bfl\") pod \"c74046c9-1f25-4668-b742-abae28a18c9b\" (UID: \"c74046c9-1f25-4668-b742-abae28a18c9b\") " Feb 18 19:22:00 crc kubenswrapper[4754]: I0218 19:22:00.510559 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74046c9-1f25-4668-b742-abae28a18c9b-catalog-content\") pod \"c74046c9-1f25-4668-b742-abae28a18c9b\" (UID: \"c74046c9-1f25-4668-b742-abae28a18c9b\") " Feb 18 19:22:00 crc kubenswrapper[4754]: I0218 19:22:00.511891 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c74046c9-1f25-4668-b742-abae28a18c9b-utilities" (OuterVolumeSpecName: "utilities") pod "c74046c9-1f25-4668-b742-abae28a18c9b" (UID: "c74046c9-1f25-4668-b742-abae28a18c9b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:22:00 crc kubenswrapper[4754]: I0218 19:22:00.524455 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c74046c9-1f25-4668-b742-abae28a18c9b-kube-api-access-n5bfl" (OuterVolumeSpecName: "kube-api-access-n5bfl") pod "c74046c9-1f25-4668-b742-abae28a18c9b" (UID: "c74046c9-1f25-4668-b742-abae28a18c9b"). InnerVolumeSpecName "kube-api-access-n5bfl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:22:00 crc kubenswrapper[4754]: I0218 19:22:00.563123 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c74046c9-1f25-4668-b742-abae28a18c9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c74046c9-1f25-4668-b742-abae28a18c9b" (UID: "c74046c9-1f25-4668-b742-abae28a18c9b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:22:00 crc kubenswrapper[4754]: I0218 19:22:00.611845 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74046c9-1f25-4668-b742-abae28a18c9b-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:00 crc kubenswrapper[4754]: I0218 19:22:00.612121 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5bfl\" (UniqueName: \"kubernetes.io/projected/c74046c9-1f25-4668-b742-abae28a18c9b-kube-api-access-n5bfl\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:00 crc kubenswrapper[4754]: I0218 19:22:00.612298 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74046c9-1f25-4668-b742-abae28a18c9b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.000053 4754 generic.go:334] "Generic (PLEG): container finished" podID="54801380-5317-40df-b2c8-1a392650cc50" containerID="c525697f7777834677c9fc652b76ee02b032bca1005811f7d0fc2a4ce04afb2c" exitCode=0 Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.000119 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74s7p" event={"ID":"54801380-5317-40df-b2c8-1a392650cc50","Type":"ContainerDied","Data":"c525697f7777834677c9fc652b76ee02b032bca1005811f7d0fc2a4ce04afb2c"} Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.002027 4754 generic.go:334] "Generic (PLEG): container 
finished" podID="c74046c9-1f25-4668-b742-abae28a18c9b" containerID="0ce156414185432719daf8098073b440743f22832634e5a530c7a6b29aa4aef8" exitCode=0 Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.002104 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqhjn" event={"ID":"c74046c9-1f25-4668-b742-abae28a18c9b","Type":"ContainerDied","Data":"0ce156414185432719daf8098073b440743f22832634e5a530c7a6b29aa4aef8"} Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.002253 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mqhjn" event={"ID":"c74046c9-1f25-4668-b742-abae28a18c9b","Type":"ContainerDied","Data":"3e7a6bf34cba4fa6e4e61d24143faa065045eec71128b4175ed7379a9f427623"} Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.002292 4754 scope.go:117] "RemoveContainer" containerID="0ce156414185432719daf8098073b440743f22832634e5a530c7a6b29aa4aef8" Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.002264 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mqhjn" Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.020457 4754 scope.go:117] "RemoveContainer" containerID="7875cf9c5a3b0d612181041a87da207367d86a05d01564e77299453a51696286" Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.048388 4754 scope.go:117] "RemoveContainer" containerID="0240911a4e0b2eda00738d321ad958e9ed2865b904610c9d62b97362af397416" Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.052227 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mqhjn"] Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.058441 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mqhjn"] Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.078523 4754 scope.go:117] "RemoveContainer" containerID="0ce156414185432719daf8098073b440743f22832634e5a530c7a6b29aa4aef8" Feb 18 19:22:01 crc kubenswrapper[4754]: E0218 19:22:01.079350 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ce156414185432719daf8098073b440743f22832634e5a530c7a6b29aa4aef8\": container with ID starting with 0ce156414185432719daf8098073b440743f22832634e5a530c7a6b29aa4aef8 not found: ID does not exist" containerID="0ce156414185432719daf8098073b440743f22832634e5a530c7a6b29aa4aef8" Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.079417 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ce156414185432719daf8098073b440743f22832634e5a530c7a6b29aa4aef8"} err="failed to get container status \"0ce156414185432719daf8098073b440743f22832634e5a530c7a6b29aa4aef8\": rpc error: code = NotFound desc = could not find container \"0ce156414185432719daf8098073b440743f22832634e5a530c7a6b29aa4aef8\": container with ID starting with 0ce156414185432719daf8098073b440743f22832634e5a530c7a6b29aa4aef8 not 
found: ID does not exist" Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.079457 4754 scope.go:117] "RemoveContainer" containerID="7875cf9c5a3b0d612181041a87da207367d86a05d01564e77299453a51696286" Feb 18 19:22:01 crc kubenswrapper[4754]: E0218 19:22:01.079949 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7875cf9c5a3b0d612181041a87da207367d86a05d01564e77299453a51696286\": container with ID starting with 7875cf9c5a3b0d612181041a87da207367d86a05d01564e77299453a51696286 not found: ID does not exist" containerID="7875cf9c5a3b0d612181041a87da207367d86a05d01564e77299453a51696286" Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.080017 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7875cf9c5a3b0d612181041a87da207367d86a05d01564e77299453a51696286"} err="failed to get container status \"7875cf9c5a3b0d612181041a87da207367d86a05d01564e77299453a51696286\": rpc error: code = NotFound desc = could not find container \"7875cf9c5a3b0d612181041a87da207367d86a05d01564e77299453a51696286\": container with ID starting with 7875cf9c5a3b0d612181041a87da207367d86a05d01564e77299453a51696286 not found: ID does not exist" Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.080065 4754 scope.go:117] "RemoveContainer" containerID="0240911a4e0b2eda00738d321ad958e9ed2865b904610c9d62b97362af397416" Feb 18 19:22:01 crc kubenswrapper[4754]: E0218 19:22:01.080748 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0240911a4e0b2eda00738d321ad958e9ed2865b904610c9d62b97362af397416\": container with ID starting with 0240911a4e0b2eda00738d321ad958e9ed2865b904610c9d62b97362af397416 not found: ID does not exist" containerID="0240911a4e0b2eda00738d321ad958e9ed2865b904610c9d62b97362af397416" Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.080791 4754 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0240911a4e0b2eda00738d321ad958e9ed2865b904610c9d62b97362af397416"} err="failed to get container status \"0240911a4e0b2eda00738d321ad958e9ed2865b904610c9d62b97362af397416\": rpc error: code = NotFound desc = could not find container \"0240911a4e0b2eda00738d321ad958e9ed2865b904610c9d62b97362af397416\": container with ID starting with 0240911a4e0b2eda00738d321ad958e9ed2865b904610c9d62b97362af397416 not found: ID does not exist" Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.305222 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-74s7p" Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.321750 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kfjn\" (UniqueName: \"kubernetes.io/projected/54801380-5317-40df-b2c8-1a392650cc50-kube-api-access-4kfjn\") pod \"54801380-5317-40df-b2c8-1a392650cc50\" (UID: \"54801380-5317-40df-b2c8-1a392650cc50\") " Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.322180 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54801380-5317-40df-b2c8-1a392650cc50-catalog-content\") pod \"54801380-5317-40df-b2c8-1a392650cc50\" (UID: \"54801380-5317-40df-b2c8-1a392650cc50\") " Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.322417 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54801380-5317-40df-b2c8-1a392650cc50-utilities\") pod \"54801380-5317-40df-b2c8-1a392650cc50\" (UID: \"54801380-5317-40df-b2c8-1a392650cc50\") " Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.323309 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/54801380-5317-40df-b2c8-1a392650cc50-utilities" (OuterVolumeSpecName: "utilities") pod "54801380-5317-40df-b2c8-1a392650cc50" (UID: "54801380-5317-40df-b2c8-1a392650cc50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.332041 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54801380-5317-40df-b2c8-1a392650cc50-kube-api-access-4kfjn" (OuterVolumeSpecName: "kube-api-access-4kfjn") pod "54801380-5317-40df-b2c8-1a392650cc50" (UID: "54801380-5317-40df-b2c8-1a392650cc50"). InnerVolumeSpecName "kube-api-access-4kfjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.377804 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54801380-5317-40df-b2c8-1a392650cc50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54801380-5317-40df-b2c8-1a392650cc50" (UID: "54801380-5317-40df-b2c8-1a392650cc50"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.424645 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kfjn\" (UniqueName: \"kubernetes.io/projected/54801380-5317-40df-b2c8-1a392650cc50-kube-api-access-4kfjn\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.424688 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54801380-5317-40df-b2c8-1a392650cc50-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:01 crc kubenswrapper[4754]: I0218 19:22:01.424701 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54801380-5317-40df-b2c8-1a392650cc50-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:02 crc kubenswrapper[4754]: I0218 19:22:02.013502 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74s7p" event={"ID":"54801380-5317-40df-b2c8-1a392650cc50","Type":"ContainerDied","Data":"1e7c3f7523f55f4d09e68dd3acd775a206cc23c7162cfe46d2e8e497374d05a3"} Feb 18 19:22:02 crc kubenswrapper[4754]: I0218 19:22:02.013558 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-74s7p" Feb 18 19:22:02 crc kubenswrapper[4754]: I0218 19:22:02.013602 4754 scope.go:117] "RemoveContainer" containerID="c525697f7777834677c9fc652b76ee02b032bca1005811f7d0fc2a4ce04afb2c" Feb 18 19:22:02 crc kubenswrapper[4754]: I0218 19:22:02.048100 4754 scope.go:117] "RemoveContainer" containerID="9c4f72f69645103255b63270cadf5866814176bc0a177f3a14059c6b114db4b3" Feb 18 19:22:02 crc kubenswrapper[4754]: I0218 19:22:02.069753 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-74s7p"] Feb 18 19:22:02 crc kubenswrapper[4754]: I0218 19:22:02.076533 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-74s7p"] Feb 18 19:22:02 crc kubenswrapper[4754]: I0218 19:22:02.089916 4754 scope.go:117] "RemoveContainer" containerID="f8d08d97a15d9d2da9693fa9bd8df322a8c13121e4f554b3a1e90259da7232c6" Feb 18 19:22:02 crc kubenswrapper[4754]: I0218 19:22:02.222786 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54801380-5317-40df-b2c8-1a392650cc50" path="/var/lib/kubelet/pods/54801380-5317-40df-b2c8-1a392650cc50/volumes" Feb 18 19:22:02 crc kubenswrapper[4754]: I0218 19:22:02.224045 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c74046c9-1f25-4668-b742-abae28a18c9b" path="/var/lib/kubelet/pods/c74046c9-1f25-4668-b742-abae28a18c9b/volumes" Feb 18 19:22:02 crc kubenswrapper[4754]: I0218 19:22:02.645881 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g8x2"] Feb 18 19:22:02 crc kubenswrapper[4754]: I0218 19:22:02.646229 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5g8x2" podUID="666f2f31-98d7-4fd3-ac3a-2a345ea089e2" containerName="registry-server" containerID="cri-o://3f146aa54a104b67221e9c390fff7119a0966ab1a4f5c902c6eb8dacb2e82bdf" 
gracePeriod=2 Feb 18 19:22:03 crc kubenswrapper[4754]: I0218 19:22:03.028184 4754 generic.go:334] "Generic (PLEG): container finished" podID="666f2f31-98d7-4fd3-ac3a-2a345ea089e2" containerID="3f146aa54a104b67221e9c390fff7119a0966ab1a4f5c902c6eb8dacb2e82bdf" exitCode=0 Feb 18 19:22:03 crc kubenswrapper[4754]: I0218 19:22:03.028297 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8x2" event={"ID":"666f2f31-98d7-4fd3-ac3a-2a345ea089e2","Type":"ContainerDied","Data":"3f146aa54a104b67221e9c390fff7119a0966ab1a4f5c902c6eb8dacb2e82bdf"} Feb 18 19:22:03 crc kubenswrapper[4754]: I0218 19:22:03.060058 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g8x2" Feb 18 19:22:03 crc kubenswrapper[4754]: I0218 19:22:03.254991 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/666f2f31-98d7-4fd3-ac3a-2a345ea089e2-catalog-content\") pod \"666f2f31-98d7-4fd3-ac3a-2a345ea089e2\" (UID: \"666f2f31-98d7-4fd3-ac3a-2a345ea089e2\") " Feb 18 19:22:03 crc kubenswrapper[4754]: I0218 19:22:03.255236 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/666f2f31-98d7-4fd3-ac3a-2a345ea089e2-utilities\") pod \"666f2f31-98d7-4fd3-ac3a-2a345ea089e2\" (UID: \"666f2f31-98d7-4fd3-ac3a-2a345ea089e2\") " Feb 18 19:22:03 crc kubenswrapper[4754]: I0218 19:22:03.255388 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxpdb\" (UniqueName: \"kubernetes.io/projected/666f2f31-98d7-4fd3-ac3a-2a345ea089e2-kube-api-access-cxpdb\") pod \"666f2f31-98d7-4fd3-ac3a-2a345ea089e2\" (UID: \"666f2f31-98d7-4fd3-ac3a-2a345ea089e2\") " Feb 18 19:22:03 crc kubenswrapper[4754]: I0218 19:22:03.256993 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/666f2f31-98d7-4fd3-ac3a-2a345ea089e2-utilities" (OuterVolumeSpecName: "utilities") pod "666f2f31-98d7-4fd3-ac3a-2a345ea089e2" (UID: "666f2f31-98d7-4fd3-ac3a-2a345ea089e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:22:03 crc kubenswrapper[4754]: I0218 19:22:03.261104 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/666f2f31-98d7-4fd3-ac3a-2a345ea089e2-kube-api-access-cxpdb" (OuterVolumeSpecName: "kube-api-access-cxpdb") pod "666f2f31-98d7-4fd3-ac3a-2a345ea089e2" (UID: "666f2f31-98d7-4fd3-ac3a-2a345ea089e2"). InnerVolumeSpecName "kube-api-access-cxpdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:22:03 crc kubenswrapper[4754]: I0218 19:22:03.278399 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/666f2f31-98d7-4fd3-ac3a-2a345ea089e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "666f2f31-98d7-4fd3-ac3a-2a345ea089e2" (UID: "666f2f31-98d7-4fd3-ac3a-2a345ea089e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:22:03 crc kubenswrapper[4754]: I0218 19:22:03.357382 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/666f2f31-98d7-4fd3-ac3a-2a345ea089e2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:03 crc kubenswrapper[4754]: I0218 19:22:03.357449 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/666f2f31-98d7-4fd3-ac3a-2a345ea089e2-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:03 crc kubenswrapper[4754]: I0218 19:22:03.357469 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxpdb\" (UniqueName: \"kubernetes.io/projected/666f2f31-98d7-4fd3-ac3a-2a345ea089e2-kube-api-access-cxpdb\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:04 crc kubenswrapper[4754]: I0218 19:22:04.045852 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g8x2" event={"ID":"666f2f31-98d7-4fd3-ac3a-2a345ea089e2","Type":"ContainerDied","Data":"e68d136738505ede56f9b84d789220d5e8c6968985431c30be4e7ea5850f384c"} Feb 18 19:22:04 crc kubenswrapper[4754]: I0218 19:22:04.046010 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g8x2" Feb 18 19:22:04 crc kubenswrapper[4754]: I0218 19:22:04.046501 4754 scope.go:117] "RemoveContainer" containerID="3f146aa54a104b67221e9c390fff7119a0966ab1a4f5c902c6eb8dacb2e82bdf" Feb 18 19:22:04 crc kubenswrapper[4754]: I0218 19:22:04.089497 4754 scope.go:117] "RemoveContainer" containerID="cc6f4d69098b70e666cb4ececede326948abaf7100680f17dca151c75dc6b18d" Feb 18 19:22:04 crc kubenswrapper[4754]: I0218 19:22:04.098932 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g8x2"] Feb 18 19:22:04 crc kubenswrapper[4754]: I0218 19:22:04.118424 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g8x2"] Feb 18 19:22:04 crc kubenswrapper[4754]: I0218 19:22:04.132098 4754 scope.go:117] "RemoveContainer" containerID="c0cb6ca0f73048ccda2489a3386299d9782981150a5db571c52205eb54543567" Feb 18 19:22:04 crc kubenswrapper[4754]: I0218 19:22:04.224266 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="666f2f31-98d7-4fd3-ac3a-2a345ea089e2" path="/var/lib/kubelet/pods/666f2f31-98d7-4fd3-ac3a-2a345ea089e2/volumes" Feb 18 19:22:08 crc kubenswrapper[4754]: I0218 19:22:08.096593 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:22:08 crc kubenswrapper[4754]: I0218 19:22:08.097050 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:22:08 crc kubenswrapper[4754]: 
I0218 19:22:08.097125 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:22:08 crc kubenswrapper[4754]: I0218 19:22:08.098064 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641"} pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:22:08 crc kubenswrapper[4754]: I0218 19:22:08.098168 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" containerID="cri-o://bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641" gracePeriod=600 Feb 18 19:22:08 crc kubenswrapper[4754]: I0218 19:22:08.374579 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lt44t"] Feb 18 19:22:09 crc kubenswrapper[4754]: I0218 19:22:09.091508 4754 generic.go:334] "Generic (PLEG): container finished" podID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerID="bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641" exitCode=0 Feb 18 19:22:09 crc kubenswrapper[4754]: I0218 19:22:09.091593 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerDied","Data":"bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641"} Feb 18 19:22:09 crc kubenswrapper[4754]: I0218 19:22:09.091954 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" 
event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"15a5a6adf9ccd125edeebc5ca9a6166993061dd39a65b3a2573a64c360c0c83d"} Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.001753 4754 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.003563 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585" gracePeriod=15 Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.003656 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad" gracePeriod=15 Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.003656 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175" gracePeriod=15 Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.003724 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f" gracePeriod=15 Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.003746 4754 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9" gracePeriod=15 Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.006745 4754 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007139 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825e85f9-84d0-4bc6-b250-29365ffbbc38" containerName="extract-utilities" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007197 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="825e85f9-84d0-4bc6-b250-29365ffbbc38" containerName="extract-utilities" Feb 18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007214 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007228 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007247 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54801380-5317-40df-b2c8-1a392650cc50" containerName="extract-utilities" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007258 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="54801380-5317-40df-b2c8-1a392650cc50" containerName="extract-utilities" Feb 18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007278 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007291 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 
18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007309 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c74046c9-1f25-4668-b742-abae28a18c9b" containerName="extract-utilities" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007322 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="c74046c9-1f25-4668-b742-abae28a18c9b" containerName="extract-utilities" Feb 18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007338 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="666f2f31-98d7-4fd3-ac3a-2a345ea089e2" containerName="registry-server" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007350 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="666f2f31-98d7-4fd3-ac3a-2a345ea089e2" containerName="registry-server" Feb 18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007361 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54801380-5317-40df-b2c8-1a392650cc50" containerName="registry-server" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007373 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="54801380-5317-40df-b2c8-1a392650cc50" containerName="registry-server" Feb 18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007390 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007402 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007421 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="786c7c56-b337-44f4-b65e-c85dd99b6cd4" containerName="pruner" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007431 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="786c7c56-b337-44f4-b65e-c85dd99b6cd4" containerName="pruner" Feb 18 
19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007446 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007459 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007476 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54801380-5317-40df-b2c8-1a392650cc50" containerName="extract-content" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007492 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="54801380-5317-40df-b2c8-1a392650cc50" containerName="extract-content" Feb 18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007507 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007520 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007534 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007545 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007564 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825e85f9-84d0-4bc6-b250-29365ffbbc38" containerName="registry-server" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007575 4754 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="825e85f9-84d0-4bc6-b250-29365ffbbc38" containerName="registry-server" Feb 18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007592 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825e85f9-84d0-4bc6-b250-29365ffbbc38" containerName="extract-content" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007604 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="825e85f9-84d0-4bc6-b250-29365ffbbc38" containerName="extract-content" Feb 18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007619 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c74046c9-1f25-4668-b742-abae28a18c9b" containerName="registry-server" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007630 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="c74046c9-1f25-4668-b742-abae28a18c9b" containerName="registry-server" Feb 18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007644 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007657 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007673 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="666f2f31-98d7-4fd3-ac3a-2a345ea089e2" containerName="extract-content" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007684 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="666f2f31-98d7-4fd3-ac3a-2a345ea089e2" containerName="extract-content" Feb 18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007698 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007710 4754 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007726 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="666f2f31-98d7-4fd3-ac3a-2a345ea089e2" containerName="extract-utilities" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007737 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="666f2f31-98d7-4fd3-ac3a-2a345ea089e2" containerName="extract-utilities" Feb 18 19:22:20 crc kubenswrapper[4754]: E0218 19:22:20.007750 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c74046c9-1f25-4668-b742-abae28a18c9b" containerName="extract-content" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007761 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="c74046c9-1f25-4668-b742-abae28a18c9b" containerName="extract-content" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007933 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="786c7c56-b337-44f4-b65e-c85dd99b6cd4" containerName="pruner" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007952 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="54801380-5317-40df-b2c8-1a392650cc50" containerName="registry-server" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007966 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007983 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.007996 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="c74046c9-1f25-4668-b742-abae28a18c9b" containerName="registry-server" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.008010 4754 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.008027 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.008041 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="825e85f9-84d0-4bc6-b250-29365ffbbc38" containerName="registry-server" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.008054 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="666f2f31-98d7-4fd3-ac3a-2a345ea089e2" containerName="registry-server" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.008068 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.008079 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.008392 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.011541 4754 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.016585 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.018693 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.018857 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.018921 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.018973 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.019032 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.023986 4754 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.121119 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.121286 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.121353 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.121402 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.121449 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.121508 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.121581 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.121657 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.122992 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.123074 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.123105 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.123128 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.123170 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.180336 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.182314 4754 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.183252 4754 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f" exitCode=0 Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.183289 4754 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad" exitCode=0 Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.183303 4754 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175" exitCode=0 Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.183317 4754 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9" exitCode=2 Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.183367 4754 scope.go:117] "RemoveContainer" containerID="92c7b173ae0bd54df41d5900ead8b9610ec5132bd91260b14e3d7ba8dc7d5459" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.222776 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.222893 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.222925 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.223040 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.223563 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:22:20 crc kubenswrapper[4754]: I0218 19:22:20.223691 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:22:21 crc kubenswrapper[4754]: I0218 19:22:21.193227 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:22:21 crc kubenswrapper[4754]: I0218 19:22:21.196427 4754 generic.go:334] "Generic (PLEG): container finished" podID="e3c10de6-a522-4975-9705-210bf58415f8" 
containerID="bb710091fd9fc7d0cc5a5e57eb436fd045f7d6d266d9caca01316b044a52babb" exitCode=0 Feb 18 19:22:21 crc kubenswrapper[4754]: I0218 19:22:21.196479 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e3c10de6-a522-4975-9705-210bf58415f8","Type":"ContainerDied","Data":"bb710091fd9fc7d0cc5a5e57eb436fd045f7d6d266d9caca01316b044a52babb"} Feb 18 19:22:21 crc kubenswrapper[4754]: I0218 19:22:21.197390 4754 status_manager.go:851] "Failed to get status for pod" podUID="e3c10de6-a522-4975-9705-210bf58415f8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.457336 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.458275 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.459227 4754 status_manager.go:851] "Failed to get status for pod" podUID="e3c10de6-a522-4975-9705-210bf58415f8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.459468 4754 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.461301 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.461605 4754 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.461817 4754 status_manager.go:851] "Failed to get status for pod" podUID="e3c10de6-a522-4975-9705-210bf58415f8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.579184 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.579236 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.579311 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e3c10de6-a522-4975-9705-210bf58415f8-var-lock\") pod \"e3c10de6-a522-4975-9705-210bf58415f8\" (UID: \"e3c10de6-a522-4975-9705-210bf58415f8\") " Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.579362 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3c10de6-a522-4975-9705-210bf58415f8-kube-api-access\") pod \"e3c10de6-a522-4975-9705-210bf58415f8\" (UID: \"e3c10de6-a522-4975-9705-210bf58415f8\") " Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.579389 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3c10de6-a522-4975-9705-210bf58415f8-kubelet-dir\") pod \"e3c10de6-a522-4975-9705-210bf58415f8\" (UID: \"e3c10de6-a522-4975-9705-210bf58415f8\") " Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.579390 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.579420 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.579447 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3c10de6-a522-4975-9705-210bf58415f8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e3c10de6-a522-4975-9705-210bf58415f8" (UID: "e3c10de6-a522-4975-9705-210bf58415f8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.579475 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.579508 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.579525 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3c10de6-a522-4975-9705-210bf58415f8-var-lock" (OuterVolumeSpecName: "var-lock") pod "e3c10de6-a522-4975-9705-210bf58415f8" (UID: "e3c10de6-a522-4975-9705-210bf58415f8"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.579631 4754 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3c10de6-a522-4975-9705-210bf58415f8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.579651 4754 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.579663 4754 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.579675 4754 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.587458 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3c10de6-a522-4975-9705-210bf58415f8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e3c10de6-a522-4975-9705-210bf58415f8" (UID: "e3c10de6-a522-4975-9705-210bf58415f8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.681934 4754 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e3c10de6-a522-4975-9705-210bf58415f8-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:22 crc kubenswrapper[4754]: I0218 19:22:22.681973 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3c10de6-a522-4975-9705-210bf58415f8-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.212511 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.213904 4754 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585" exitCode=0 Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.214004 4754 scope.go:117] "RemoveContainer" containerID="e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.214205 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.219612 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e3c10de6-a522-4975-9705-210bf58415f8","Type":"ContainerDied","Data":"ec08db060729dab221db8f4ef55b3c5b8bb8382da2e530531a3488f62997a41a"} Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.219657 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec08db060729dab221db8f4ef55b3c5b8bb8382da2e530531a3488f62997a41a" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.219692 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.238451 4754 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.238976 4754 status_manager.go:851] "Failed to get status for pod" podUID="e3c10de6-a522-4975-9705-210bf58415f8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.239742 4754 status_manager.go:851] "Failed to get status for pod" podUID="e3c10de6-a522-4975-9705-210bf58415f8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 
19:22:23.240092 4754 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.242077 4754 scope.go:117] "RemoveContainer" containerID="09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.259424 4754 scope.go:117] "RemoveContainer" containerID="55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.276390 4754 scope.go:117] "RemoveContainer" containerID="ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.297769 4754 scope.go:117] "RemoveContainer" containerID="c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.314546 4754 scope.go:117] "RemoveContainer" containerID="5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.338295 4754 scope.go:117] "RemoveContainer" containerID="e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f" Feb 18 19:22:23 crc kubenswrapper[4754]: E0218 19:22:23.338858 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\": container with ID starting with e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f not found: ID does not exist" containerID="e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.338895 4754 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f"} err="failed to get container status \"e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\": rpc error: code = NotFound desc = could not find container \"e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f\": container with ID starting with e7f42d4d30621b60e21be68a711240a5b297d06a164e70cc2ff36ef1ec5f5c5f not found: ID does not exist" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.338920 4754 scope.go:117] "RemoveContainer" containerID="09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad" Feb 18 19:22:23 crc kubenswrapper[4754]: E0218 19:22:23.339240 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\": container with ID starting with 09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad not found: ID does not exist" containerID="09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.339293 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad"} err="failed to get container status \"09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\": rpc error: code = NotFound desc = could not find container \"09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad\": container with ID starting with 09a10609ef840b65075c287b17fbdc19af469c01b71e044f6beb1aba5b6652ad not found: ID does not exist" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.339325 4754 scope.go:117] "RemoveContainer" containerID="55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175" Feb 18 19:22:23 crc kubenswrapper[4754]: E0218 19:22:23.339715 4754 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\": container with ID starting with 55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175 not found: ID does not exist" containerID="55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.339746 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175"} err="failed to get container status \"55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\": rpc error: code = NotFound desc = could not find container \"55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175\": container with ID starting with 55f1867d31a52379ec848a4afee92cbe7e45246502b65e7478ffca5bf0372175 not found: ID does not exist" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.339761 4754 scope.go:117] "RemoveContainer" containerID="ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9" Feb 18 19:22:23 crc kubenswrapper[4754]: E0218 19:22:23.339990 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\": container with ID starting with ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9 not found: ID does not exist" containerID="ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.340015 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9"} err="failed to get container status \"ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\": rpc error: code = NotFound desc = could not find container 
\"ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9\": container with ID starting with ecfec2cdf547916eb2ddaba10b023335990baa1b52756639bec55cbca48fb3d9 not found: ID does not exist" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.340031 4754 scope.go:117] "RemoveContainer" containerID="c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585" Feb 18 19:22:23 crc kubenswrapper[4754]: E0218 19:22:23.342013 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\": container with ID starting with c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585 not found: ID does not exist" containerID="c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.342046 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585"} err="failed to get container status \"c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\": rpc error: code = NotFound desc = could not find container \"c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585\": container with ID starting with c79e2cd8688b51a8272aa5d37d6809e12909cc97d90eea4b9ea92442be59b585 not found: ID does not exist" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.342064 4754 scope.go:117] "RemoveContainer" containerID="5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b" Feb 18 19:22:23 crc kubenswrapper[4754]: E0218 19:22:23.344053 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\": container with ID starting with 5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b not found: ID does not exist" 
containerID="5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b" Feb 18 19:22:23 crc kubenswrapper[4754]: I0218 19:22:23.344080 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b"} err="failed to get container status \"5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\": rpc error: code = NotFound desc = could not find container \"5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b\": container with ID starting with 5b4813fafb9d4149d9ef3dc3ee8ddcca68a4984fe8364f3a7b73bc53586e388b not found: ID does not exist" Feb 18 19:22:24 crc kubenswrapper[4754]: I0218 19:22:24.219041 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 18 19:22:25 crc kubenswrapper[4754]: E0218 19:22:25.057649 4754 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:25 crc kubenswrapper[4754]: I0218 19:22:25.058480 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:25 crc kubenswrapper[4754]: E0218 19:22:25.081494 4754 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18956d9bf78887ba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:22:25.080436666 +0000 UTC m=+247.530849452,LastTimestamp:2026-02-18 19:22:25.080436666 +0000 UTC m=+247.530849452,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 19:22:25 crc kubenswrapper[4754]: I0218 19:22:25.234994 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0b19e0bc3d0dcf73bed38b757251214f74eb3eb40854fabd6965607007258d0e"} Feb 18 19:22:26 crc kubenswrapper[4754]: I0218 19:22:26.241440 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7130758e8c2d158b10d9ece74ef9bf099de055c64674977243b9d3f698e83a61"} Feb 18 19:22:26 crc 
kubenswrapper[4754]: E0218 19:22:26.242775 4754 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:26 crc kubenswrapper[4754]: I0218 19:22:26.242848 4754 status_manager.go:851] "Failed to get status for pod" podUID="e3c10de6-a522-4975-9705-210bf58415f8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:27 crc kubenswrapper[4754]: E0218 19:22:27.248305 4754 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:27 crc kubenswrapper[4754]: E0218 19:22:27.341637 4754 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:27 crc kubenswrapper[4754]: E0218 19:22:27.342099 4754 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:27 crc kubenswrapper[4754]: E0218 19:22:27.342720 4754 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:27 crc kubenswrapper[4754]: E0218 19:22:27.343290 4754 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:27 crc kubenswrapper[4754]: E0218 19:22:27.343757 4754 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:27 crc kubenswrapper[4754]: I0218 19:22:27.343790 4754 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 18 19:22:27 crc kubenswrapper[4754]: E0218 19:22:27.344062 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="200ms" Feb 18 19:22:27 crc kubenswrapper[4754]: E0218 19:22:27.545553 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="400ms" Feb 18 19:22:27 crc kubenswrapper[4754]: E0218 19:22:27.947040 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="800ms" Feb 18 19:22:28 crc kubenswrapper[4754]: I0218 19:22:28.213443 4754 status_manager.go:851] "Failed to get status for pod" podUID="e3c10de6-a522-4975-9705-210bf58415f8" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:28 crc kubenswrapper[4754]: E0218 19:22:28.748310 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="1.6s" Feb 18 19:22:30 crc kubenswrapper[4754]: E0218 19:22:30.349441 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="3.2s" Feb 18 19:22:31 crc kubenswrapper[4754]: E0218 19:22:31.219241 4754 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" volumeName="registry-storage" Feb 18 19:22:32 crc kubenswrapper[4754]: I0218 19:22:32.209359 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:22:32 crc kubenswrapper[4754]: I0218 19:22:32.210984 4754 status_manager.go:851] "Failed to get status for pod" podUID="e3c10de6-a522-4975-9705-210bf58415f8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:32 crc kubenswrapper[4754]: I0218 19:22:32.236915 4754 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cbb813d6-cecc-41a2-8649-7f47f6020d18" Feb 18 19:22:32 crc kubenswrapper[4754]: I0218 19:22:32.236973 4754 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cbb813d6-cecc-41a2-8649-7f47f6020d18" Feb 18 19:22:32 crc kubenswrapper[4754]: E0218 19:22:32.237937 4754 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:22:32 crc kubenswrapper[4754]: I0218 19:22:32.239004 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:22:33 crc kubenswrapper[4754]: I0218 19:22:33.299108 4754 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="75e7c9e3f9df740f6f4dc1a290d1067e9cc3b1b93c5c872221fd3435c720703b" exitCode=0 Feb 18 19:22:33 crc kubenswrapper[4754]: I0218 19:22:33.299246 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"75e7c9e3f9df740f6f4dc1a290d1067e9cc3b1b93c5c872221fd3435c720703b"} Feb 18 19:22:33 crc kubenswrapper[4754]: I0218 19:22:33.300576 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"239485188b8429492445b524eb5d5ea5a89ff7dc25ec0b92023ea2de7a43cb41"} Feb 18 19:22:33 crc kubenswrapper[4754]: I0218 19:22:33.301223 4754 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cbb813d6-cecc-41a2-8649-7f47f6020d18" Feb 18 19:22:33 crc kubenswrapper[4754]: I0218 19:22:33.301269 4754 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cbb813d6-cecc-41a2-8649-7f47f6020d18" Feb 18 19:22:33 crc kubenswrapper[4754]: E0218 19:22:33.301919 4754 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:22:33 crc kubenswrapper[4754]: I0218 19:22:33.301924 4754 status_manager.go:851] "Failed to get status for pod" podUID="e3c10de6-a522-4975-9705-210bf58415f8" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:33 crc kubenswrapper[4754]: I0218 19:22:33.305403 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 19:22:33 crc kubenswrapper[4754]: I0218 19:22:33.305655 4754 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61" exitCode=1 Feb 18 19:22:33 crc kubenswrapper[4754]: I0218 19:22:33.305738 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61"} Feb 18 19:22:33 crc kubenswrapper[4754]: I0218 19:22:33.306939 4754 scope.go:117] "RemoveContainer" containerID="fe92ac6d231ec4c445ffcd5dc7838722dcbf94cf67f2a0f0231ee424bee9ca61" Feb 18 19:22:33 crc kubenswrapper[4754]: I0218 19:22:33.309081 4754 status_manager.go:851] "Failed to get status for pod" podUID="e3c10de6-a522-4975-9705-210bf58415f8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:33 crc kubenswrapper[4754]: I0218 19:22:33.309942 4754 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Feb 18 19:22:33 
crc kubenswrapper[4754]: I0218 19:22:33.411702 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" podUID="4c8554fb-ba0f-48ac-900b-01d5a0c007ab" containerName="oauth-openshift" containerID="cri-o://d07dee80af5d56e30799bbe5052e789e30c41a22e0f9741972f390fe3d7e407a" gracePeriod=15
Feb 18 19:22:33 crc kubenswrapper[4754]: E0218 19:22:33.550158 4754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="6.4s"
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.311340 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lt44t"
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.316934 4754 generic.go:334] "Generic (PLEG): container finished" podID="4c8554fb-ba0f-48ac-900b-01d5a0c007ab" containerID="d07dee80af5d56e30799bbe5052e789e30c41a22e0f9741972f390fe3d7e407a" exitCode=0
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.317048 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" event={"ID":"4c8554fb-ba0f-48ac-900b-01d5a0c007ab","Type":"ContainerDied","Data":"d07dee80af5d56e30799bbe5052e789e30c41a22e0f9741972f390fe3d7e407a"}
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.317116 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lt44t" event={"ID":"4c8554fb-ba0f-48ac-900b-01d5a0c007ab","Type":"ContainerDied","Data":"90f0006104261583b0db81b8699f24e8c5d2680332c6d23289a8e95aba566ee4"}
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.317071 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lt44t"
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.317151 4754 scope.go:117] "RemoveContainer" containerID="d07dee80af5d56e30799bbe5052e789e30c41a22e0f9741972f390fe3d7e407a"
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.324789 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.324916 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f72666e8e88cf5184387abdd076f63dfeb9733d53d5b0d233be399e63cd05553"}
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.329105 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4edc1eea026684dc0121be62ff4d7b7ec603e985a44e03d02fa0b5ffd16f6d80"}
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.329162 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e4c5878955dabc95666a8dabd1deaab45f8131ce69d82685f63ba38fe304f8a4"}
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.329173 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7009187aab058a9d0797068096cdc9f829c80f84401b13a4a9ccfaae996dccba"}
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.347245 4754 scope.go:117] "RemoveContainer" containerID="d07dee80af5d56e30799bbe5052e789e30c41a22e0f9741972f390fe3d7e407a"
Feb 18 19:22:34 crc kubenswrapper[4754]: E0218 19:22:34.347879 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d07dee80af5d56e30799bbe5052e789e30c41a22e0f9741972f390fe3d7e407a\": container with ID starting with d07dee80af5d56e30799bbe5052e789e30c41a22e0f9741972f390fe3d7e407a not found: ID does not exist" containerID="d07dee80af5d56e30799bbe5052e789e30c41a22e0f9741972f390fe3d7e407a"
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.347915 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d07dee80af5d56e30799bbe5052e789e30c41a22e0f9741972f390fe3d7e407a"} err="failed to get container status \"d07dee80af5d56e30799bbe5052e789e30c41a22e0f9741972f390fe3d7e407a\": rpc error: code = NotFound desc = could not find container \"d07dee80af5d56e30799bbe5052e789e30c41a22e0f9741972f390fe3d7e407a\": container with ID starting with d07dee80af5d56e30799bbe5052e789e30c41a22e0f9741972f390fe3d7e407a not found: ID does not exist"
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.465460 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-router-certs\") pod \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") "
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.465515 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-audit-policies\") pod \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") "
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.465555 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-idp-0-file-data\") pod \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") "
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.465581 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-template-login\") pod \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") "
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.465602 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-serving-cert\") pod \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") "
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.465636 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-trusted-ca-bundle\") pod \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") "
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.465772 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-audit-dir\") pod \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") "
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.465990 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "4c8554fb-ba0f-48ac-900b-01d5a0c007ab" (UID: "4c8554fb-ba0f-48ac-900b-01d5a0c007ab"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.466757 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "4c8554fb-ba0f-48ac-900b-01d5a0c007ab" (UID: "4c8554fb-ba0f-48ac-900b-01d5a0c007ab"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.467040 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "4c8554fb-ba0f-48ac-900b-01d5a0c007ab" (UID: "4c8554fb-ba0f-48ac-900b-01d5a0c007ab"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.467099 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn7vt\" (UniqueName: \"kubernetes.io/projected/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-kube-api-access-dn7vt\") pod \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") "
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.467427 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-template-provider-selection\") pod \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") "
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.467731 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-cliconfig\") pod \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") "
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.467762 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-service-ca\") pod \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") "
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.467805 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-template-error\") pod \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") "
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.467997 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "4c8554fb-ba0f-48ac-900b-01d5a0c007ab" (UID: "4c8554fb-ba0f-48ac-900b-01d5a0c007ab"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.467831 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-ocp-branding-template\") pod \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") "
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.468185 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-session\") pod \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\" (UID: \"4c8554fb-ba0f-48ac-900b-01d5a0c007ab\") "
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.468325 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "4c8554fb-ba0f-48ac-900b-01d5a0c007ab" (UID: "4c8554fb-ba0f-48ac-900b-01d5a0c007ab"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.468653 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.468673 4754 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.468689 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.468701 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.468712 4754 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.471956 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "4c8554fb-ba0f-48ac-900b-01d5a0c007ab" (UID: "4c8554fb-ba0f-48ac-900b-01d5a0c007ab"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.472614 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "4c8554fb-ba0f-48ac-900b-01d5a0c007ab" (UID: "4c8554fb-ba0f-48ac-900b-01d5a0c007ab"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.480252 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-kube-api-access-dn7vt" (OuterVolumeSpecName: "kube-api-access-dn7vt") pod "4c8554fb-ba0f-48ac-900b-01d5a0c007ab" (UID: "4c8554fb-ba0f-48ac-900b-01d5a0c007ab"). InnerVolumeSpecName "kube-api-access-dn7vt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.481489 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "4c8554fb-ba0f-48ac-900b-01d5a0c007ab" (UID: "4c8554fb-ba0f-48ac-900b-01d5a0c007ab"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.482561 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "4c8554fb-ba0f-48ac-900b-01d5a0c007ab" (UID: "4c8554fb-ba0f-48ac-900b-01d5a0c007ab"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.484367 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "4c8554fb-ba0f-48ac-900b-01d5a0c007ab" (UID: "4c8554fb-ba0f-48ac-900b-01d5a0c007ab"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.484706 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "4c8554fb-ba0f-48ac-900b-01d5a0c007ab" (UID: "4c8554fb-ba0f-48ac-900b-01d5a0c007ab"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.485028 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "4c8554fb-ba0f-48ac-900b-01d5a0c007ab" (UID: "4c8554fb-ba0f-48ac-900b-01d5a0c007ab"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.485284 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "4c8554fb-ba0f-48ac-900b-01d5a0c007ab" (UID: "4c8554fb-ba0f-48ac-900b-01d5a0c007ab"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.569982 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn7vt\" (UniqueName: \"kubernetes.io/projected/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-kube-api-access-dn7vt\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.570027 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.570044 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.570056 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.570067 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.570077 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.570088 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.570100 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:34 crc kubenswrapper[4754]: I0218 19:22:34.570110 4754 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4c8554fb-ba0f-48ac-900b-01d5a0c007ab-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:35 crc kubenswrapper[4754]: I0218 19:22:35.338664 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b43cbfcb4d787474637eab15bce682d3904306fd328953ddde639d782ee5e735"}
Feb 18 19:22:35 crc kubenswrapper[4754]: I0218 19:22:35.339018 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6229f7e301e2e55e3066ad5949696b5130666873f91a31d3d4776abd16827b7a"}
Feb 18 19:22:35 crc kubenswrapper[4754]: I0218 19:22:35.339063 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:22:35 crc kubenswrapper[4754]: I0218 19:22:35.339252 4754 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cbb813d6-cecc-41a2-8649-7f47f6020d18"
Feb 18 19:22:35 crc kubenswrapper[4754]: I0218 19:22:35.339274 4754 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cbb813d6-cecc-41a2-8649-7f47f6020d18"
Feb 18 19:22:37 crc kubenswrapper[4754]: I0218 19:22:37.239624 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:22:37 crc kubenswrapper[4754]: I0218 19:22:37.239699 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:22:37 crc kubenswrapper[4754]: I0218 19:22:37.246218 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:22:39 crc kubenswrapper[4754]: I0218 19:22:39.229859 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:22:39 crc kubenswrapper[4754]: I0218 19:22:39.236885 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:22:39 crc kubenswrapper[4754]: I0218 19:22:39.365629 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:22:40 crc kubenswrapper[4754]: I0218 19:22:40.346987 4754 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:22:40 crc kubenswrapper[4754]: I0218 19:22:40.372625 4754 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cbb813d6-cecc-41a2-8649-7f47f6020d18"
Feb 18 19:22:40 crc kubenswrapper[4754]: I0218 19:22:40.372670 4754 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cbb813d6-cecc-41a2-8649-7f47f6020d18"
Feb 18 19:22:40 crc kubenswrapper[4754]: I0218 19:22:40.377986 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:22:40 crc kubenswrapper[4754]: I0218 19:22:40.383971 4754 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="69e779b4-8f6f-4ecd-9ca4-b24448d75b9b"
Feb 18 19:22:41 crc kubenswrapper[4754]: I0218 19:22:41.377388 4754 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cbb813d6-cecc-41a2-8649-7f47f6020d18"
Feb 18 19:22:41 crc kubenswrapper[4754]: I0218 19:22:41.377862 4754 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cbb813d6-cecc-41a2-8649-7f47f6020d18"
Feb 18 19:22:48 crc kubenswrapper[4754]: I0218 19:22:48.230054 4754 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="69e779b4-8f6f-4ecd-9ca4-b24448d75b9b"
Feb 18 19:22:50 crc kubenswrapper[4754]: I0218 19:22:50.551698 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 18 19:22:50 crc kubenswrapper[4754]: I0218 19:22:50.771680 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:22:50 crc kubenswrapper[4754]: I0218 19:22:50.822323 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 18 19:22:50 crc kubenswrapper[4754]: I0218 19:22:50.933290 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 18 19:22:50 crc kubenswrapper[4754]: I0218 19:22:50.959385 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 18 19:22:51 crc kubenswrapper[4754]: I0218 19:22:51.147792 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 18 19:22:51 crc kubenswrapper[4754]: I0218 19:22:51.159999 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 18 19:22:51 crc kubenswrapper[4754]: I0218 19:22:51.182798 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 18 19:22:51 crc kubenswrapper[4754]: I0218 19:22:51.294794 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 18 19:22:51 crc kubenswrapper[4754]: I0218 19:22:51.536767 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 18 19:22:52 crc kubenswrapper[4754]: I0218 19:22:52.019611 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 18 19:22:52 crc kubenswrapper[4754]: I0218 19:22:52.331377 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 18 19:22:52 crc kubenswrapper[4754]: I0218 19:22:52.359617 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 18 19:22:52 crc kubenswrapper[4754]: I0218 19:22:52.402744 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 18 19:22:52 crc kubenswrapper[4754]: I0218 19:22:52.553846 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 18 19:22:52 crc kubenswrapper[4754]: I0218 19:22:52.592789 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 18 19:22:52 crc kubenswrapper[4754]: I0218 19:22:52.690417 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 18 19:22:52 crc kubenswrapper[4754]: I0218 19:22:52.700238 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 18 19:22:52 crc kubenswrapper[4754]: I0218 19:22:52.772351 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 18 19:22:52 crc kubenswrapper[4754]: I0218 19:22:52.954240 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 18 19:22:53 crc kubenswrapper[4754]: I0218 19:22:53.034793 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 18 19:22:53 crc kubenswrapper[4754]: I0218 19:22:53.131405 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 18 19:22:53 crc kubenswrapper[4754]: I0218 19:22:53.208487 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 18 19:22:53 crc kubenswrapper[4754]: I0218 19:22:53.324394 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 18 19:22:53 crc kubenswrapper[4754]: I0218 19:22:53.437896 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 18 19:22:53 crc kubenswrapper[4754]: I0218 19:22:53.442681 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 18 19:22:53 crc kubenswrapper[4754]: I0218 19:22:53.476011 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 18 19:22:53 crc kubenswrapper[4754]: I0218 19:22:53.780340 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 18 19:22:53 crc kubenswrapper[4754]: I0218 19:22:53.782374 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 18 19:22:53 crc kubenswrapper[4754]: I0218 19:22:53.816205 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 18 19:22:53 crc kubenswrapper[4754]: I0218 19:22:53.880960 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 18 19:22:54 crc kubenswrapper[4754]: I0218 19:22:54.063570 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 18 19:22:54 crc kubenswrapper[4754]: I0218 19:22:54.306129 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 18 19:22:54 crc kubenswrapper[4754]: I0218 19:22:54.402202 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 18 19:22:54 crc kubenswrapper[4754]: I0218 19:22:54.413166 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 18 19:22:54 crc kubenswrapper[4754]: I0218 19:22:54.515675 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 18 19:22:54 crc kubenswrapper[4754]: I0218 19:22:54.712470 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 18 19:22:54 crc kubenswrapper[4754]: I0218 19:22:54.976458 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 18 19:22:55 crc kubenswrapper[4754]: I0218 19:22:55.027186 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 18 19:22:55 crc kubenswrapper[4754]: I0218 19:22:55.067933 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 18 19:22:55 crc kubenswrapper[4754]: I0218 19:22:55.165249 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 18 19:22:55 crc kubenswrapper[4754]: I0218 19:22:55.199009 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 18 19:22:55 crc kubenswrapper[4754]: I0218 19:22:55.292254 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 18 19:22:55 crc kubenswrapper[4754]: I0218 19:22:55.494828 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 18 19:22:55 crc kubenswrapper[4754]: I0218 19:22:55.573985 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 18 19:22:55 crc kubenswrapper[4754]: I0218 19:22:55.618831 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 18 19:22:55 crc kubenswrapper[4754]: I0218 19:22:55.629171 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 18 19:22:55 crc kubenswrapper[4754]: I0218 19:22:55.653708 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 18 19:22:55 crc kubenswrapper[4754]: I0218 19:22:55.739278 4754 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 18 19:22:55 crc kubenswrapper[4754]: I0218 19:22:55.744186 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lt44t","openshift-kube-apiserver/kube-apiserver-crc"]
Feb 18 19:22:55 crc kubenswrapper[4754]: I0218 19:22:55.744266 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 18 19:22:55 crc kubenswrapper[4754]: I0218 19:22:55.752793 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:22:55 crc kubenswrapper[4754]: I0218 19:22:55.770784 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 18 19:22:55 crc kubenswrapper[4754]: I0218 19:22:55.780532 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.7804989 podStartE2EDuration="15.7804989s" podCreationTimestamp="2026-02-18 19:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:22:55.774533825 +0000 UTC m=+278.224946621" watchObservedRunningTime="2026-02-18 19:22:55.7804989 +0000 UTC m=+278.230911716"
Feb 18 19:22:55 crc kubenswrapper[4754]: I0218 19:22:55.806740 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 18 19:22:55 crc kubenswrapper[4754]: I0218 19:22:55.830415 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 18 19:22:55 crc kubenswrapper[4754]: I0218 19:22:55.856923 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 18 19:22:56 crc kubenswrapper[4754]: I0218 19:22:56.073387 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 18 19:22:56 crc kubenswrapper[4754]: I0218 19:22:56.134647 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 18 19:22:56 crc kubenswrapper[4754]: I0218 19:22:56.144071 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 18 19:22:56 crc kubenswrapper[4754]: I0218 19:22:56.184612 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 18 19:22:56 crc kubenswrapper[4754]: I0218 19:22:56.198664 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 18 19:22:56 crc kubenswrapper[4754]: I0218 19:22:56.224095 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c8554fb-ba0f-48ac-900b-01d5a0c007ab" path="/var/lib/kubelet/pods/4c8554fb-ba0f-48ac-900b-01d5a0c007ab/volumes"
Feb 18 19:22:56 crc kubenswrapper[4754]: I0218 19:22:56.247612 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 18 19:22:56 crc kubenswrapper[4754]: I0218 19:22:56.319085 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 18 19:22:56 crc kubenswrapper[4754]: I0218 19:22:56.391228 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 18 19:22:56 crc kubenswrapper[4754]: I0218 19:22:56.436468 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 18 19:22:56 crc kubenswrapper[4754]: I0218 19:22:56.480273 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 18 19:22:56 crc kubenswrapper[4754]: I0218 19:22:56.545804 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 18 19:22:56 crc kubenswrapper[4754]: I0218 19:22:56.726430 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 18 19:22:56 crc kubenswrapper[4754]: I0218 19:22:56.791687 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 18 19:22:56 crc kubenswrapper[4754]: I0218 19:22:56.882288 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 18 19:22:56 crc kubenswrapper[4754]: I0218 19:22:56.955479 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 18 19:22:56 crc kubenswrapper[4754]: I0218 19:22:56.974433 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.032638 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz"]
Feb 18 19:22:57 crc kubenswrapper[4754]: E0218 19:22:57.033043 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3c10de6-a522-4975-9705-210bf58415f8" containerName="installer"
Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.033069 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3c10de6-a522-4975-9705-210bf58415f8" containerName="installer"
Feb 18 19:22:57 
crc kubenswrapper[4754]: E0218 19:22:57.033100 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c8554fb-ba0f-48ac-900b-01d5a0c007ab" containerName="oauth-openshift" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.033116 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c8554fb-ba0f-48ac-900b-01d5a0c007ab" containerName="oauth-openshift" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.033356 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c8554fb-ba0f-48ac-900b-01d5a0c007ab" containerName="oauth-openshift" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.033378 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3c10de6-a522-4975-9705-210bf58415f8" containerName="installer" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.034092 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.039244 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.039729 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.042417 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.043642 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.043907 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.044237 
4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.044420 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.044555 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.044687 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.045423 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.045433 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.047832 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.053305 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz"] Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.053368 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.055025 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.059883 4754 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.078882 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.096009 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.118350 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.140649 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-router-certs\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.140739 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ee41f40-8a2a-40c4-b02b-6dd6817599af-audit-dir\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.140793 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " 
pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.140825 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-session\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.140855 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.140893 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.140915 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-user-template-error\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 
19:22:57.140945 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.140975 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-service-ca\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.141213 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.141273 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.141297 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cn2nq\" (UniqueName: \"kubernetes.io/projected/5ee41f40-8a2a-40c4-b02b-6dd6817599af-kube-api-access-cn2nq\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.141335 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-user-template-login\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.141382 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ee41f40-8a2a-40c4-b02b-6dd6817599af-audit-policies\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.155486 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.243005 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-service-ca\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.243071 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.243110 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.243133 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn2nq\" (UniqueName: \"kubernetes.io/projected/5ee41f40-8a2a-40c4-b02b-6dd6817599af-kube-api-access-cn2nq\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.244287 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-user-template-login\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.244330 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ee41f40-8a2a-40c4-b02b-6dd6817599af-audit-policies\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 
19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.244357 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-router-certs\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.244383 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ee41f40-8a2a-40c4-b02b-6dd6817599af-audit-dir\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.244405 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.244427 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-session\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.244448 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.244479 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.244502 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-user-template-error\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.244534 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.244617 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-service-ca\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " 
pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.244718 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.244873 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ee41f40-8a2a-40c4-b02b-6dd6817599af-audit-dir\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.245277 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.245455 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ee41f40-8a2a-40c4-b02b-6dd6817599af-audit-policies\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.250603 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.250788 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.253786 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.253856 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-user-template-error\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.253856 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-user-template-login\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " 
pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.255337 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.255391 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-session\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.255463 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ee41f40-8a2a-40c4-b02b-6dd6817599af-v4-0-config-system-router-certs\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.260829 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.265726 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn2nq\" (UniqueName: \"kubernetes.io/projected/5ee41f40-8a2a-40c4-b02b-6dd6817599af-kube-api-access-cn2nq\") pod \"oauth-openshift-6dc597b7cf-gnbhz\" (UID: \"5ee41f40-8a2a-40c4-b02b-6dd6817599af\") " pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc 
kubenswrapper[4754]: I0218 19:22:57.296516 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.323492 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.357439 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.409709 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.493348 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.551211 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.557218 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.584480 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.664330 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.677867 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.818704 4754 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.831831 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 19:22:57 crc kubenswrapper[4754]: I0218 19:22:57.930248 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 19:22:58 crc kubenswrapper[4754]: I0218 19:22:58.103757 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 19:22:58 crc kubenswrapper[4754]: I0218 19:22:58.196650 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 19:22:58 crc kubenswrapper[4754]: I0218 19:22:58.216108 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 19:22:58 crc kubenswrapper[4754]: I0218 19:22:58.388830 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 19:22:58 crc kubenswrapper[4754]: I0218 19:22:58.447032 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 19:22:58 crc kubenswrapper[4754]: I0218 19:22:58.544878 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 19:22:58 crc kubenswrapper[4754]: I0218 19:22:58.584324 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 19:22:58 crc kubenswrapper[4754]: I0218 19:22:58.703282 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 19:22:58 crc kubenswrapper[4754]: I0218 
19:22:58.849252 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 19:22:58 crc kubenswrapper[4754]: I0218 19:22:58.985369 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:58.994194 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.054235 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.054313 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.083517 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.121545 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.135712 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.206114 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.213727 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.229234 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 19:22:59 crc kubenswrapper[4754]: 
I0218 19:22:59.314195 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.316194 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.402516 4754 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.415543 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.438693 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.471811 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.496800 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.498344 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.524852 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.700767 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.786444 4754 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.817507 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.876226 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.916660 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.935952 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.943982 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 19:22:59 crc kubenswrapper[4754]: I0218 19:22:59.953898 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.062198 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.110320 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.143289 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.275156 4754 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.313628 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.336964 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.364853 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.382662 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 19:23:00 crc kubenswrapper[4754]: E0218 19:23:00.457501 4754 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 18 19:23:00 crc kubenswrapper[4754]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-6dc597b7cf-gnbhz_openshift-authentication_5ee41f40-8a2a-40c4-b02b-6dd6817599af_0(d9d9a4feb3d7a96f2ead1d239c315398c13a38cd05a64276a69dcee0f5f523c6): error adding pod openshift-authentication_oauth-openshift-6dc597b7cf-gnbhz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d9d9a4feb3d7a96f2ead1d239c315398c13a38cd05a64276a69dcee0f5f523c6" Netns:"/var/run/netns/fa6afb02-1cf8-4fe0-ad9a-8ebe0c4dafeb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-6dc597b7cf-gnbhz;K8S_POD_INFRA_CONTAINER_ID=d9d9a4feb3d7a96f2ead1d239c315398c13a38cd05a64276a69dcee0f5f523c6;K8S_POD_UID=5ee41f40-8a2a-40c4-b02b-6dd6817599af" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz] networking: Multus: 
[openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz/5ee41f40-8a2a-40c4-b02b-6dd6817599af]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-6dc597b7cf-gnbhz in out of cluster comm: pod "oauth-openshift-6dc597b7cf-gnbhz" not found Feb 18 19:23:00 crc kubenswrapper[4754]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 18 19:23:00 crc kubenswrapper[4754]: > Feb 18 19:23:00 crc kubenswrapper[4754]: E0218 19:23:00.457589 4754 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 18 19:23:00 crc kubenswrapper[4754]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-6dc597b7cf-gnbhz_openshift-authentication_5ee41f40-8a2a-40c4-b02b-6dd6817599af_0(d9d9a4feb3d7a96f2ead1d239c315398c13a38cd05a64276a69dcee0f5f523c6): error adding pod openshift-authentication_oauth-openshift-6dc597b7cf-gnbhz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d9d9a4feb3d7a96f2ead1d239c315398c13a38cd05a64276a69dcee0f5f523c6" Netns:"/var/run/netns/fa6afb02-1cf8-4fe0-ad9a-8ebe0c4dafeb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-6dc597b7cf-gnbhz;K8S_POD_INFRA_CONTAINER_ID=d9d9a4feb3d7a96f2ead1d239c315398c13a38cd05a64276a69dcee0f5f523c6;K8S_POD_UID=5ee41f40-8a2a-40c4-b02b-6dd6817599af" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz] networking: Multus: 
[openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz/5ee41f40-8a2a-40c4-b02b-6dd6817599af]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-6dc597b7cf-gnbhz in out of cluster comm: pod "oauth-openshift-6dc597b7cf-gnbhz" not found Feb 18 19:23:00 crc kubenswrapper[4754]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 18 19:23:00 crc kubenswrapper[4754]: > pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:23:00 crc kubenswrapper[4754]: E0218 19:23:00.457618 4754 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 18 19:23:00 crc kubenswrapper[4754]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-6dc597b7cf-gnbhz_openshift-authentication_5ee41f40-8a2a-40c4-b02b-6dd6817599af_0(d9d9a4feb3d7a96f2ead1d239c315398c13a38cd05a64276a69dcee0f5f523c6): error adding pod openshift-authentication_oauth-openshift-6dc597b7cf-gnbhz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d9d9a4feb3d7a96f2ead1d239c315398c13a38cd05a64276a69dcee0f5f523c6" Netns:"/var/run/netns/fa6afb02-1cf8-4fe0-ad9a-8ebe0c4dafeb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-6dc597b7cf-gnbhz;K8S_POD_INFRA_CONTAINER_ID=d9d9a4feb3d7a96f2ead1d239c315398c13a38cd05a64276a69dcee0f5f523c6;K8S_POD_UID=5ee41f40-8a2a-40c4-b02b-6dd6817599af" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz] 
networking: Multus: [openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz/5ee41f40-8a2a-40c4-b02b-6dd6817599af]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-6dc597b7cf-gnbhz in out of cluster comm: pod "oauth-openshift-6dc597b7cf-gnbhz" not found Feb 18 19:23:00 crc kubenswrapper[4754]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 18 19:23:00 crc kubenswrapper[4754]: > pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:23:00 crc kubenswrapper[4754]: E0218 19:23:00.457690 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-6dc597b7cf-gnbhz_openshift-authentication(5ee41f40-8a2a-40c4-b02b-6dd6817599af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-6dc597b7cf-gnbhz_openshift-authentication(5ee41f40-8a2a-40c4-b02b-6dd6817599af)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-6dc597b7cf-gnbhz_openshift-authentication_5ee41f40-8a2a-40c4-b02b-6dd6817599af_0(d9d9a4feb3d7a96f2ead1d239c315398c13a38cd05a64276a69dcee0f5f523c6): error adding pod openshift-authentication_oauth-openshift-6dc597b7cf-gnbhz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"d9d9a4feb3d7a96f2ead1d239c315398c13a38cd05a64276a69dcee0f5f523c6\\\" Netns:\\\"/var/run/netns/fa6afb02-1cf8-4fe0-ad9a-8ebe0c4dafeb\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-6dc597b7cf-gnbhz;K8S_POD_INFRA_CONTAINER_ID=d9d9a4feb3d7a96f2ead1d239c315398c13a38cd05a64276a69dcee0f5f523c6;K8S_POD_UID=5ee41f40-8a2a-40c4-b02b-6dd6817599af\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz] networking: Multus: [openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz/5ee41f40-8a2a-40c4-b02b-6dd6817599af]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-6dc597b7cf-gnbhz in out of cluster comm: pod \\\"oauth-openshift-6dc597b7cf-gnbhz\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" podUID="5ee41f40-8a2a-40c4-b02b-6dd6817599af" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.502123 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.503213 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.504131 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.583511 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.586825 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.623453 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.623977 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.687991 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.719294 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.930027 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.941533 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 19:23:00 crc kubenswrapper[4754]: I0218 19:23:00.990971 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 19:23:01 crc 
kubenswrapper[4754]: I0218 19:23:01.011765 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.025015 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.031065 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.060503 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.163873 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.179310 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.192007 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.193427 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.240420 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.265098 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.363313 4754 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.393720 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.395594 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.471425 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.477544 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.479433 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.497082 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.549430 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.568412 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.576071 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.617814 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.820324 4754 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.833542 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.907077 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 19:23:01 crc kubenswrapper[4754]: I0218 19:23:01.989079 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 19:23:02 crc kubenswrapper[4754]: I0218 19:23:02.002862 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 19:23:02 crc kubenswrapper[4754]: I0218 19:23:02.125063 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 19:23:02 crc kubenswrapper[4754]: I0218 19:23:02.231024 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 19:23:02 crc kubenswrapper[4754]: I0218 19:23:02.281884 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 19:23:02 crc kubenswrapper[4754]: I0218 19:23:02.338428 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 19:23:02 crc kubenswrapper[4754]: I0218 19:23:02.359675 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 19:23:02 crc kubenswrapper[4754]: I0218 19:23:02.395461 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 19:23:02 crc kubenswrapper[4754]: I0218 19:23:02.574417 4754 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:23:02 crc kubenswrapper[4754]: I0218 19:23:02.574769 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 19:23:02 crc kubenswrapper[4754]: I0218 19:23:02.610007 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 19:23:02 crc kubenswrapper[4754]: I0218 19:23:02.654477 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:23:02 crc kubenswrapper[4754]: I0218 19:23:02.724432 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 19:23:02 crc kubenswrapper[4754]: I0218 19:23:02.817254 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:23:02 crc kubenswrapper[4754]: I0218 19:23:02.856562 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:23:02 crc kubenswrapper[4754]: I0218 19:23:02.869016 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 19:23:02 crc kubenswrapper[4754]: I0218 19:23:02.914657 4754 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 19:23:02 crc kubenswrapper[4754]: I0218 19:23:02.914943 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" 
containerID="cri-o://7130758e8c2d158b10d9ece74ef9bf099de055c64674977243b9d3f698e83a61" gracePeriod=5 Feb 18 19:23:02 crc kubenswrapper[4754]: I0218 19:23:02.928529 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 19:23:02 crc kubenswrapper[4754]: I0218 19:23:02.952436 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 19:23:03 crc kubenswrapper[4754]: I0218 19:23:03.187587 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 19:23:03 crc kubenswrapper[4754]: I0218 19:23:03.202197 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 19:23:03 crc kubenswrapper[4754]: I0218 19:23:03.207701 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 19:23:03 crc kubenswrapper[4754]: I0218 19:23:03.329687 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 19:23:03 crc kubenswrapper[4754]: I0218 19:23:03.356423 4754 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 19:23:03 crc kubenswrapper[4754]: I0218 19:23:03.414044 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 19:23:03 crc kubenswrapper[4754]: E0218 19:23:03.434518 4754 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 18 19:23:03 crc kubenswrapper[4754]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-6dc597b7cf-gnbhz_openshift-authentication_5ee41f40-8a2a-40c4-b02b-6dd6817599af_0(0bcdeb39ebe2d12de5845ed575112591d4cf1f787a5d5e96a6670ba9485d47dc): 
error adding pod openshift-authentication_oauth-openshift-6dc597b7cf-gnbhz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0bcdeb39ebe2d12de5845ed575112591d4cf1f787a5d5e96a6670ba9485d47dc" Netns:"/var/run/netns/79f4ef5b-ec97-4655-a443-c4179b889a7a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-6dc597b7cf-gnbhz;K8S_POD_INFRA_CONTAINER_ID=0bcdeb39ebe2d12de5845ed575112591d4cf1f787a5d5e96a6670ba9485d47dc;K8S_POD_UID=5ee41f40-8a2a-40c4-b02b-6dd6817599af" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz] networking: Multus: [openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz/5ee41f40-8a2a-40c4-b02b-6dd6817599af]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-6dc597b7cf-gnbhz in out of cluster comm: pod "oauth-openshift-6dc597b7cf-gnbhz" not found Feb 18 19:23:03 crc kubenswrapper[4754]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 18 19:23:03 crc kubenswrapper[4754]: > Feb 18 19:23:03 crc kubenswrapper[4754]: E0218 19:23:03.434647 4754 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 18 19:23:03 crc kubenswrapper[4754]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-6dc597b7cf-gnbhz_openshift-authentication_5ee41f40-8a2a-40c4-b02b-6dd6817599af_0(0bcdeb39ebe2d12de5845ed575112591d4cf1f787a5d5e96a6670ba9485d47dc): error adding pod 
openshift-authentication_oauth-openshift-6dc597b7cf-gnbhz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0bcdeb39ebe2d12de5845ed575112591d4cf1f787a5d5e96a6670ba9485d47dc" Netns:"/var/run/netns/79f4ef5b-ec97-4655-a443-c4179b889a7a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-6dc597b7cf-gnbhz;K8S_POD_INFRA_CONTAINER_ID=0bcdeb39ebe2d12de5845ed575112591d4cf1f787a5d5e96a6670ba9485d47dc;K8S_POD_UID=5ee41f40-8a2a-40c4-b02b-6dd6817599af" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz] networking: Multus: [openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz/5ee41f40-8a2a-40c4-b02b-6dd6817599af]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-6dc597b7cf-gnbhz in out of cluster comm: pod "oauth-openshift-6dc597b7cf-gnbhz" not found Feb 18 19:23:03 crc kubenswrapper[4754]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 18 19:23:03 crc kubenswrapper[4754]: > pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:23:03 crc kubenswrapper[4754]: E0218 19:23:03.434679 4754 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 18 19:23:03 crc kubenswrapper[4754]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_oauth-openshift-6dc597b7cf-gnbhz_openshift-authentication_5ee41f40-8a2a-40c4-b02b-6dd6817599af_0(0bcdeb39ebe2d12de5845ed575112591d4cf1f787a5d5e96a6670ba9485d47dc): error adding pod openshift-authentication_oauth-openshift-6dc597b7cf-gnbhz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0bcdeb39ebe2d12de5845ed575112591d4cf1f787a5d5e96a6670ba9485d47dc" Netns:"/var/run/netns/79f4ef5b-ec97-4655-a443-c4179b889a7a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-6dc597b7cf-gnbhz;K8S_POD_INFRA_CONTAINER_ID=0bcdeb39ebe2d12de5845ed575112591d4cf1f787a5d5e96a6670ba9485d47dc;K8S_POD_UID=5ee41f40-8a2a-40c4-b02b-6dd6817599af" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz] networking: Multus: [openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz/5ee41f40-8a2a-40c4-b02b-6dd6817599af]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-6dc597b7cf-gnbhz in out of cluster comm: pod "oauth-openshift-6dc597b7cf-gnbhz" not found Feb 18 19:23:03 crc kubenswrapper[4754]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 18 19:23:03 crc kubenswrapper[4754]: > pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:23:03 crc kubenswrapper[4754]: E0218 19:23:03.434787 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"oauth-openshift-6dc597b7cf-gnbhz_openshift-authentication(5ee41f40-8a2a-40c4-b02b-6dd6817599af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-6dc597b7cf-gnbhz_openshift-authentication(5ee41f40-8a2a-40c4-b02b-6dd6817599af)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-6dc597b7cf-gnbhz_openshift-authentication_5ee41f40-8a2a-40c4-b02b-6dd6817599af_0(0bcdeb39ebe2d12de5845ed575112591d4cf1f787a5d5e96a6670ba9485d47dc): error adding pod openshift-authentication_oauth-openshift-6dc597b7cf-gnbhz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"0bcdeb39ebe2d12de5845ed575112591d4cf1f787a5d5e96a6670ba9485d47dc\\\" Netns:\\\"/var/run/netns/79f4ef5b-ec97-4655-a443-c4179b889a7a\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-6dc597b7cf-gnbhz;K8S_POD_INFRA_CONTAINER_ID=0bcdeb39ebe2d12de5845ed575112591d4cf1f787a5d5e96a6670ba9485d47dc;K8S_POD_UID=5ee41f40-8a2a-40c4-b02b-6dd6817599af\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz] networking: Multus: [openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz/5ee41f40-8a2a-40c4-b02b-6dd6817599af]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-6dc597b7cf-gnbhz in out of cluster comm: pod \\\"oauth-openshift-6dc597b7cf-gnbhz\\\" not found\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" podUID="5ee41f40-8a2a-40c4-b02b-6dd6817599af" Feb 18 19:23:03 crc kubenswrapper[4754]: I0218 19:23:03.543928 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 19:23:03 crc kubenswrapper[4754]: I0218 19:23:03.587689 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 19:23:04 crc kubenswrapper[4754]: I0218 19:23:04.178597 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 19:23:04 crc kubenswrapper[4754]: I0218 19:23:04.214829 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 19:23:04 crc kubenswrapper[4754]: I0218 19:23:04.324626 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 19:23:04 crc kubenswrapper[4754]: I0218 19:23:04.454311 4754 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 19:23:04 crc kubenswrapper[4754]: I0218 19:23:04.466330 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 19:23:04 crc kubenswrapper[4754]: I0218 19:23:04.469745 4754 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Feb 18 19:23:04 crc kubenswrapper[4754]: I0218 19:23:04.505369 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 19:23:04 crc kubenswrapper[4754]: I0218 19:23:04.536708 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 19:23:04 crc kubenswrapper[4754]: I0218 19:23:04.631444 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 19:23:04 crc kubenswrapper[4754]: I0218 19:23:04.633273 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 19:23:04 crc kubenswrapper[4754]: I0218 19:23:04.640190 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 19:23:04 crc kubenswrapper[4754]: I0218 19:23:04.747956 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 19:23:04 crc kubenswrapper[4754]: I0218 19:23:04.771912 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 19:23:04 crc kubenswrapper[4754]: I0218 19:23:04.866992 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 19:23:04 crc kubenswrapper[4754]: I0218 19:23:04.877920 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 19:23:04 crc kubenswrapper[4754]: I0218 19:23:04.894575 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 19:23:04 crc kubenswrapper[4754]: I0218 19:23:04.894909 4754 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 19:23:04 crc kubenswrapper[4754]: I0218 19:23:04.907006 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 19:23:04 crc kubenswrapper[4754]: I0218 19:23:04.965743 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 19:23:05 crc kubenswrapper[4754]: I0218 19:23:05.088008 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 19:23:05 crc kubenswrapper[4754]: I0218 19:23:05.132222 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 19:23:05 crc kubenswrapper[4754]: I0218 19:23:05.219434 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 19:23:05 crc kubenswrapper[4754]: I0218 19:23:05.364263 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:23:05 crc kubenswrapper[4754]: I0218 19:23:05.489605 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:23:05 crc kubenswrapper[4754]: I0218 19:23:05.554446 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 19:23:05 crc kubenswrapper[4754]: I0218 19:23:05.563217 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 19:23:05 crc kubenswrapper[4754]: I0218 19:23:05.719204 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" 
Feb 18 19:23:06 crc kubenswrapper[4754]: I0218 19:23:06.126457 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 19:23:06 crc kubenswrapper[4754]: I0218 19:23:06.187685 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 19:23:06 crc kubenswrapper[4754]: I0218 19:23:06.443847 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 19:23:06 crc kubenswrapper[4754]: I0218 19:23:06.458174 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 19:23:06 crc kubenswrapper[4754]: I0218 19:23:06.462694 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 19:23:06 crc kubenswrapper[4754]: I0218 19:23:06.492764 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 19:23:06 crc kubenswrapper[4754]: I0218 19:23:06.494382 4754 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 19:23:06 crc kubenswrapper[4754]: I0218 19:23:06.532975 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 19:23:06 crc kubenswrapper[4754]: I0218 19:23:06.657757 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 19:23:06 crc kubenswrapper[4754]: I0218 19:23:06.705228 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:23:06 crc kubenswrapper[4754]: I0218 19:23:06.728229 4754 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 19:23:06 crc kubenswrapper[4754]: I0218 19:23:06.795959 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 19:23:07 crc kubenswrapper[4754]: I0218 19:23:07.296091 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 19:23:07 crc kubenswrapper[4754]: I0218 19:23:07.578963 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 19:23:07 crc kubenswrapper[4754]: I0218 19:23:07.697354 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 19:23:07 crc kubenswrapper[4754]: I0218 19:23:07.857812 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 19:23:07 crc kubenswrapper[4754]: I0218 19:23:07.898637 4754 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 19:23:07 crc kubenswrapper[4754]: I0218 19:23:07.991264 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.130954 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.422216 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.451216 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.478865 4754 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.478957 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.550435 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.550496 4754 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="7130758e8c2d158b10d9ece74ef9bf099de055c64674977243b9d3f698e83a61" exitCode=137 Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.550606 4754 scope.go:117] "RemoveContainer" containerID="7130758e8c2d158b10d9ece74ef9bf099de055c64674977243b9d3f698e83a61" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.550629 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.556654 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.556740 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.556744 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.556829 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.556853 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.556862 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.556897 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.556913 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.557067 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.557629 4754 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.557703 4754 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.557716 4754 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.557728 4754 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.566349 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.567565 4754 scope.go:117] "RemoveContainer" containerID="7130758e8c2d158b10d9ece74ef9bf099de055c64674977243b9d3f698e83a61" Feb 18 19:23:08 crc kubenswrapper[4754]: E0218 19:23:08.568052 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7130758e8c2d158b10d9ece74ef9bf099de055c64674977243b9d3f698e83a61\": container with ID starting with 7130758e8c2d158b10d9ece74ef9bf099de055c64674977243b9d3f698e83a61 not found: ID does not exist" containerID="7130758e8c2d158b10d9ece74ef9bf099de055c64674977243b9d3f698e83a61" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.568111 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7130758e8c2d158b10d9ece74ef9bf099de055c64674977243b9d3f698e83a61"} err="failed to get container status \"7130758e8c2d158b10d9ece74ef9bf099de055c64674977243b9d3f698e83a61\": rpc error: code = NotFound desc = could not find container \"7130758e8c2d158b10d9ece74ef9bf099de055c64674977243b9d3f698e83a61\": container with ID starting with 7130758e8c2d158b10d9ece74ef9bf099de055c64674977243b9d3f698e83a61 not found: ID does not exist" Feb 18 19:23:08 crc kubenswrapper[4754]: I0218 19:23:08.659363 4754 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:10 crc kubenswrapper[4754]: I0218 19:23:10.220435 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 18 19:23:14 crc kubenswrapper[4754]: I0218 19:23:14.209286 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:23:14 crc kubenswrapper[4754]: I0218 19:23:14.210574 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:23:14 crc kubenswrapper[4754]: I0218 19:23:14.443271 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz"] Feb 18 19:23:14 crc kubenswrapper[4754]: I0218 19:23:14.595946 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" event={"ID":"5ee41f40-8a2a-40c4-b02b-6dd6817599af","Type":"ContainerStarted","Data":"0124aef06d65d38ccc6429d88aaa117339abfaf8fc32e73ec4aca25e6d58592c"} Feb 18 19:23:15 crc kubenswrapper[4754]: I0218 19:23:15.604079 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" event={"ID":"5ee41f40-8a2a-40c4-b02b-6dd6817599af","Type":"ContainerStarted","Data":"ba2ecf49b3d61bf6044c4853f1421be5ffcbb1c0afc891e6ba375da1f85fc17b"} Feb 18 19:23:15 crc kubenswrapper[4754]: I0218 19:23:15.605519 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:23:15 crc kubenswrapper[4754]: I0218 19:23:15.612808 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" Feb 18 19:23:15 crc kubenswrapper[4754]: I0218 19:23:15.636036 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6dc597b7cf-gnbhz" podStartSLOduration=67.636009158 podStartE2EDuration="1m7.636009158s" 
podCreationTimestamp="2026-02-18 19:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:23:15.632466422 +0000 UTC m=+298.082879248" watchObservedRunningTime="2026-02-18 19:23:15.636009158 +0000 UTC m=+298.086421954" Feb 18 19:23:18 crc kubenswrapper[4754]: I0218 19:23:18.005502 4754 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 18 19:23:26 crc kubenswrapper[4754]: I0218 19:23:26.509896 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9txkp"] Feb 18 19:23:26 crc kubenswrapper[4754]: I0218 19:23:26.510801 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" podUID="67dcbf0b-40be-4fae-967b-d049b796d2f5" containerName="controller-manager" containerID="cri-o://a9fcb8505265517db86c99fc7b02ade25f69f0d4fda036642d820c71f50cf7b2" gracePeriod=30 Feb 18 19:23:26 crc kubenswrapper[4754]: I0218 19:23:26.627010 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94"] Feb 18 19:23:26 crc kubenswrapper[4754]: I0218 19:23:26.627559 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" podUID="d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6" containerName="route-controller-manager" containerID="cri-o://c271a3a1fffbde3714589a265212b4f6c1da7ca6728d2666120cb76a177eb803" gracePeriod=30 Feb 18 19:23:26 crc kubenswrapper[4754]: I0218 19:23:26.686376 4754 generic.go:334] "Generic (PLEG): container finished" podID="67dcbf0b-40be-4fae-967b-d049b796d2f5" containerID="a9fcb8505265517db86c99fc7b02ade25f69f0d4fda036642d820c71f50cf7b2" exitCode=0 Feb 18 19:23:26 crc kubenswrapper[4754]: I0218 
19:23:26.686520 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" event={"ID":"67dcbf0b-40be-4fae-967b-d049b796d2f5","Type":"ContainerDied","Data":"a9fcb8505265517db86c99fc7b02ade25f69f0d4fda036642d820c71f50cf7b2"} Feb 18 19:23:26 crc kubenswrapper[4754]: I0218 19:23:26.688606 4754 generic.go:334] "Generic (PLEG): container finished" podID="8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae" containerID="c1417b0216f0ce836aed2bce7f71ab5e544ff02db1216c1292c6ac2b201bbede" exitCode=0 Feb 18 19:23:26 crc kubenswrapper[4754]: I0218 19:23:26.688646 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" event={"ID":"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae","Type":"ContainerDied","Data":"c1417b0216f0ce836aed2bce7f71ab5e544ff02db1216c1292c6ac2b201bbede"} Feb 18 19:23:26 crc kubenswrapper[4754]: I0218 19:23:26.689281 4754 scope.go:117] "RemoveContainer" containerID="c1417b0216f0ce836aed2bce7f71ab5e544ff02db1216c1292c6ac2b201bbede" Feb 18 19:23:26 crc kubenswrapper[4754]: I0218 19:23:26.986774 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.044168 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.135762 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brppz\" (UniqueName: \"kubernetes.io/projected/67dcbf0b-40be-4fae-967b-d049b796d2f5-kube-api-access-brppz\") pod \"67dcbf0b-40be-4fae-967b-d049b796d2f5\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.135869 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67dcbf0b-40be-4fae-967b-d049b796d2f5-config\") pod \"67dcbf0b-40be-4fae-967b-d049b796d2f5\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.135893 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-config\") pod \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\" (UID: \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\") " Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.135922 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-serving-cert\") pod \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\" (UID: \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\") " Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.135976 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-client-ca\") pod \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\" (UID: \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\") " Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.136008 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/67dcbf0b-40be-4fae-967b-d049b796d2f5-client-ca\") pod \"67dcbf0b-40be-4fae-967b-d049b796d2f5\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.136061 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67dcbf0b-40be-4fae-967b-d049b796d2f5-serving-cert\") pod \"67dcbf0b-40be-4fae-967b-d049b796d2f5\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.136107 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pr76\" (UniqueName: \"kubernetes.io/projected/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-kube-api-access-4pr76\") pod \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\" (UID: \"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6\") " Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.136181 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67dcbf0b-40be-4fae-967b-d049b796d2f5-proxy-ca-bundles\") pod \"67dcbf0b-40be-4fae-967b-d049b796d2f5\" (UID: \"67dcbf0b-40be-4fae-967b-d049b796d2f5\") " Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.136924 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-client-ca" (OuterVolumeSpecName: "client-ca") pod "d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6" (UID: "d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.136992 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-config" (OuterVolumeSpecName: "config") pod "d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6" (UID: "d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.140060 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67dcbf0b-40be-4fae-967b-d049b796d2f5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "67dcbf0b-40be-4fae-967b-d049b796d2f5" (UID: "67dcbf0b-40be-4fae-967b-d049b796d2f5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.140103 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67dcbf0b-40be-4fae-967b-d049b796d2f5-config" (OuterVolumeSpecName: "config") pod "67dcbf0b-40be-4fae-967b-d049b796d2f5" (UID: "67dcbf0b-40be-4fae-967b-d049b796d2f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.140090 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67dcbf0b-40be-4fae-967b-d049b796d2f5-client-ca" (OuterVolumeSpecName: "client-ca") pod "67dcbf0b-40be-4fae-967b-d049b796d2f5" (UID: "67dcbf0b-40be-4fae-967b-d049b796d2f5"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.143197 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-kube-api-access-4pr76" (OuterVolumeSpecName: "kube-api-access-4pr76") pod "d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6" (UID: "d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6"). InnerVolumeSpecName "kube-api-access-4pr76". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.144425 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67dcbf0b-40be-4fae-967b-d049b796d2f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "67dcbf0b-40be-4fae-967b-d049b796d2f5" (UID: "67dcbf0b-40be-4fae-967b-d049b796d2f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.145227 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6" (UID: "d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.145469 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67dcbf0b-40be-4fae-967b-d049b796d2f5-kube-api-access-brppz" (OuterVolumeSpecName: "kube-api-access-brppz") pod "67dcbf0b-40be-4fae-967b-d049b796d2f5" (UID: "67dcbf0b-40be-4fae-967b-d049b796d2f5"). InnerVolumeSpecName "kube-api-access-brppz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.237283 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67dcbf0b-40be-4fae-967b-d049b796d2f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.237335 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pr76\" (UniqueName: \"kubernetes.io/projected/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-kube-api-access-4pr76\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.237349 4754 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/67dcbf0b-40be-4fae-967b-d049b796d2f5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.237361 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brppz\" (UniqueName: \"kubernetes.io/projected/67dcbf0b-40be-4fae-967b-d049b796d2f5-kube-api-access-brppz\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.237375 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67dcbf0b-40be-4fae-967b-d049b796d2f5-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.237385 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.237393 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.237402 4754 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.237410 4754 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/67dcbf0b-40be-4fae-967b-d049b796d2f5-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.698365 4754 generic.go:334] "Generic (PLEG): container finished" podID="d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6" containerID="c271a3a1fffbde3714589a265212b4f6c1da7ca6728d2666120cb76a177eb803" exitCode=0 Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.698454 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.698467 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" event={"ID":"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6","Type":"ContainerDied","Data":"c271a3a1fffbde3714589a265212b4f6c1da7ca6728d2666120cb76a177eb803"} Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.698859 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94" event={"ID":"d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6","Type":"ContainerDied","Data":"2a676fe8cb20b5033b5cc639b17a1309d4d3c9dd862ee6cae62d41f9a58df9a2"} Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.698883 4754 scope.go:117] "RemoveContainer" containerID="c271a3a1fffbde3714589a265212b4f6c1da7ca6728d2666120cb76a177eb803" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.701288 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" 
event={"ID":"67dcbf0b-40be-4fae-967b-d049b796d2f5","Type":"ContainerDied","Data":"4e0650903b8732d56ae8d25ee102380382a60762fbecc1ea28ee40a3b284e085"} Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.701334 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9txkp" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.704728 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" event={"ID":"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae","Type":"ContainerStarted","Data":"3dfd8b4ff502d44cc1f5efac4120461e72872fa164a33b2fb93dc55bd3f38271"} Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.705844 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.711112 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.745328 4754 scope.go:117] "RemoveContainer" containerID="c271a3a1fffbde3714589a265212b4f6c1da7ca6728d2666120cb76a177eb803" Feb 18 19:23:27 crc kubenswrapper[4754]: E0218 19:23:27.746588 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c271a3a1fffbde3714589a265212b4f6c1da7ca6728d2666120cb76a177eb803\": container with ID starting with c271a3a1fffbde3714589a265212b4f6c1da7ca6728d2666120cb76a177eb803 not found: ID does not exist" containerID="c271a3a1fffbde3714589a265212b4f6c1da7ca6728d2666120cb76a177eb803" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.746715 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c271a3a1fffbde3714589a265212b4f6c1da7ca6728d2666120cb76a177eb803"} err="failed to get 
container status \"c271a3a1fffbde3714589a265212b4f6c1da7ca6728d2666120cb76a177eb803\": rpc error: code = NotFound desc = could not find container \"c271a3a1fffbde3714589a265212b4f6c1da7ca6728d2666120cb76a177eb803\": container with ID starting with c271a3a1fffbde3714589a265212b4f6c1da7ca6728d2666120cb76a177eb803 not found: ID does not exist" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.746767 4754 scope.go:117] "RemoveContainer" containerID="a9fcb8505265517db86c99fc7b02ade25f69f0d4fda036642d820c71f50cf7b2" Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.785119 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94"] Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.789446 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h9n94"] Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.792668 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9txkp"] Feb 18 19:23:27 crc kubenswrapper[4754]: I0218 19:23:27.799666 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9txkp"] Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.240971 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67dcbf0b-40be-4fae-967b-d049b796d2f5" path="/var/lib/kubelet/pods/67dcbf0b-40be-4fae-967b-d049b796d2f5/volumes" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.241836 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6" path="/var/lib/kubelet/pods/d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6/volumes" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.386078 4754 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk"] Feb 18 19:23:28 crc kubenswrapper[4754]: E0218 19:23:28.387035 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.387276 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 19:23:28 crc kubenswrapper[4754]: E0218 19:23:28.387432 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67dcbf0b-40be-4fae-967b-d049b796d2f5" containerName="controller-manager" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.387556 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="67dcbf0b-40be-4fae-967b-d049b796d2f5" containerName="controller-manager" Feb 18 19:23:28 crc kubenswrapper[4754]: E0218 19:23:28.387695 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6" containerName="route-controller-manager" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.387815 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6" containerName="route-controller-manager" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.388537 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="67dcbf0b-40be-4fae-967b-d049b796d2f5" containerName="controller-manager" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.388701 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.388863 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="d130569b-03ff-4e1b-8cd4-cf0eca8ff4e6" containerName="route-controller-manager" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.390442 4754 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.395425 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-598fcbc587-zxgdt"] Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.396376 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.396626 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.396797 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.397005 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.397252 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.397680 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.398292 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.402553 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.402668 4754 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.403222 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.403581 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.403817 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.405774 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.410495 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-598fcbc587-zxgdt"] Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.424169 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk"] Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.440484 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.458808 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7mv4\" (UniqueName: \"kubernetes.io/projected/93ed2cbd-9ed4-4c84-b55b-edcd90138561-kube-api-access-s7mv4\") pod \"route-controller-manager-86c947bc4c-hjkkk\" (UID: \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.458868 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ed2cbd-9ed4-4c84-b55b-edcd90138561-serving-cert\") pod \"route-controller-manager-86c947bc4c-hjkkk\" (UID: \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.458898 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ed2cbd-9ed4-4c84-b55b-edcd90138561-config\") pod \"route-controller-manager-86c947bc4c-hjkkk\" (UID: \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.458918 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzx8r\" (UniqueName: \"kubernetes.io/projected/b08b377e-d75f-43a6-bd4d-b9f91255afa6-kube-api-access-jzx8r\") pod \"controller-manager-598fcbc587-zxgdt\" (UID: \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.458941 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93ed2cbd-9ed4-4c84-b55b-edcd90138561-client-ca\") pod \"route-controller-manager-86c947bc4c-hjkkk\" (UID: \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.458961 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b08b377e-d75f-43a6-bd4d-b9f91255afa6-config\") pod 
\"controller-manager-598fcbc587-zxgdt\" (UID: \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.458990 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b08b377e-d75f-43a6-bd4d-b9f91255afa6-serving-cert\") pod \"controller-manager-598fcbc587-zxgdt\" (UID: \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.459040 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b08b377e-d75f-43a6-bd4d-b9f91255afa6-client-ca\") pod \"controller-manager-598fcbc587-zxgdt\" (UID: \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.459060 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b08b377e-d75f-43a6-bd4d-b9f91255afa6-proxy-ca-bundles\") pod \"controller-manager-598fcbc587-zxgdt\" (UID: \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.560813 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b08b377e-d75f-43a6-bd4d-b9f91255afa6-client-ca\") pod \"controller-manager-598fcbc587-zxgdt\" (UID: \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.560888 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b08b377e-d75f-43a6-bd4d-b9f91255afa6-proxy-ca-bundles\") pod \"controller-manager-598fcbc587-zxgdt\" (UID: \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.560991 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7mv4\" (UniqueName: \"kubernetes.io/projected/93ed2cbd-9ed4-4c84-b55b-edcd90138561-kube-api-access-s7mv4\") pod \"route-controller-manager-86c947bc4c-hjkkk\" (UID: \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.561528 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ed2cbd-9ed4-4c84-b55b-edcd90138561-serving-cert\") pod \"route-controller-manager-86c947bc4c-hjkkk\" (UID: \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.561584 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ed2cbd-9ed4-4c84-b55b-edcd90138561-config\") pod \"route-controller-manager-86c947bc4c-hjkkk\" (UID: \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.561617 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzx8r\" (UniqueName: \"kubernetes.io/projected/b08b377e-d75f-43a6-bd4d-b9f91255afa6-kube-api-access-jzx8r\") pod \"controller-manager-598fcbc587-zxgdt\" (UID: 
\"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.561667 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93ed2cbd-9ed4-4c84-b55b-edcd90138561-client-ca\") pod \"route-controller-manager-86c947bc4c-hjkkk\" (UID: \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.561714 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b08b377e-d75f-43a6-bd4d-b9f91255afa6-config\") pod \"controller-manager-598fcbc587-zxgdt\" (UID: \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.561768 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b08b377e-d75f-43a6-bd4d-b9f91255afa6-serving-cert\") pod \"controller-manager-598fcbc587-zxgdt\" (UID: \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.562242 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b08b377e-d75f-43a6-bd4d-b9f91255afa6-client-ca\") pod \"controller-manager-598fcbc587-zxgdt\" (UID: \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.562255 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b08b377e-d75f-43a6-bd4d-b9f91255afa6-proxy-ca-bundles\") pod \"controller-manager-598fcbc587-zxgdt\" (UID: \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.563294 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93ed2cbd-9ed4-4c84-b55b-edcd90138561-client-ca\") pod \"route-controller-manager-86c947bc4c-hjkkk\" (UID: \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.563834 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ed2cbd-9ed4-4c84-b55b-edcd90138561-config\") pod \"route-controller-manager-86c947bc4c-hjkkk\" (UID: \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.565839 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b08b377e-d75f-43a6-bd4d-b9f91255afa6-config\") pod \"controller-manager-598fcbc587-zxgdt\" (UID: \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.568113 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b08b377e-d75f-43a6-bd4d-b9f91255afa6-serving-cert\") pod \"controller-manager-598fcbc587-zxgdt\" (UID: \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.578760 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ed2cbd-9ed4-4c84-b55b-edcd90138561-serving-cert\") pod \"route-controller-manager-86c947bc4c-hjkkk\" (UID: \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.580429 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzx8r\" (UniqueName: \"kubernetes.io/projected/b08b377e-d75f-43a6-bd4d-b9f91255afa6-kube-api-access-jzx8r\") pod \"controller-manager-598fcbc587-zxgdt\" (UID: \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.582174 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7mv4\" (UniqueName: \"kubernetes.io/projected/93ed2cbd-9ed4-4c84-b55b-edcd90138561-kube-api-access-s7mv4\") pod \"route-controller-manager-86c947bc4c-hjkkk\" (UID: \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.713891 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" Feb 18 19:23:28 crc kubenswrapper[4754]: I0218 19:23:28.726749 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:29 crc kubenswrapper[4754]: I0218 19:23:29.013377 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk"] Feb 18 19:23:29 crc kubenswrapper[4754]: I0218 19:23:29.050941 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-598fcbc587-zxgdt"] Feb 18 19:23:29 crc kubenswrapper[4754]: W0218 19:23:29.056641 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb08b377e_d75f_43a6_bd4d_b9f91255afa6.slice/crio-872040d24979dfd837e304d730a182b3b366dad34df1021af2168936b2d62294 WatchSource:0}: Error finding container 872040d24979dfd837e304d730a182b3b366dad34df1021af2168936b2d62294: Status 404 returned error can't find the container with id 872040d24979dfd837e304d730a182b3b366dad34df1021af2168936b2d62294 Feb 18 19:23:29 crc kubenswrapper[4754]: I0218 19:23:29.728472 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" event={"ID":"b08b377e-d75f-43a6-bd4d-b9f91255afa6","Type":"ContainerStarted","Data":"f70066021b02afe697b9568338b2fd3daca17cb071b08ce0343327bf614b2a60"} Feb 18 19:23:29 crc kubenswrapper[4754]: I0218 19:23:29.728560 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" event={"ID":"b08b377e-d75f-43a6-bd4d-b9f91255afa6","Type":"ContainerStarted","Data":"872040d24979dfd837e304d730a182b3b366dad34df1021af2168936b2d62294"} Feb 18 19:23:29 crc kubenswrapper[4754]: I0218 19:23:29.728996 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:29 crc kubenswrapper[4754]: I0218 19:23:29.736095 4754 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" event={"ID":"93ed2cbd-9ed4-4c84-b55b-edcd90138561","Type":"ContainerStarted","Data":"bd84c055fb62b221ae5e400c0bf513f9003b6e1381316a6b756b8f05cd4ed5b6"} Feb 18 19:23:29 crc kubenswrapper[4754]: I0218 19:23:29.736120 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" event={"ID":"93ed2cbd-9ed4-4c84-b55b-edcd90138561","Type":"ContainerStarted","Data":"cd62d55f5d1e403a49f29afbe77287aad9d51692fa06ad65f4056a7e16cb622f"} Feb 18 19:23:29 crc kubenswrapper[4754]: I0218 19:23:29.736154 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" Feb 18 19:23:29 crc kubenswrapper[4754]: I0218 19:23:29.736365 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:29 crc kubenswrapper[4754]: I0218 19:23:29.741030 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" Feb 18 19:23:29 crc kubenswrapper[4754]: I0218 19:23:29.756940 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" podStartSLOduration=3.7569180810000002 podStartE2EDuration="3.756918081s" podCreationTimestamp="2026-02-18 19:23:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:23:29.753457408 +0000 UTC m=+312.203870234" watchObservedRunningTime="2026-02-18 19:23:29.756918081 +0000 UTC m=+312.207330877" Feb 18 19:23:29 crc kubenswrapper[4754]: I0218 19:23:29.776893 4754 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" podStartSLOduration=3.776867422 podStartE2EDuration="3.776867422s" podCreationTimestamp="2026-02-18 19:23:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:23:29.775214373 +0000 UTC m=+312.225627179" watchObservedRunningTime="2026-02-18 19:23:29.776867422 +0000 UTC m=+312.227280218" Feb 18 19:23:46 crc kubenswrapper[4754]: I0218 19:23:46.583687 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-598fcbc587-zxgdt"] Feb 18 19:23:46 crc kubenswrapper[4754]: I0218 19:23:46.584719 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" podUID="b08b377e-d75f-43a6-bd4d-b9f91255afa6" containerName="controller-manager" containerID="cri-o://f70066021b02afe697b9568338b2fd3daca17cb071b08ce0343327bf614b2a60" gracePeriod=30 Feb 18 19:23:46 crc kubenswrapper[4754]: I0218 19:23:46.654625 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk"] Feb 18 19:23:46 crc kubenswrapper[4754]: I0218 19:23:46.654999 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" podUID="93ed2cbd-9ed4-4c84-b55b-edcd90138561" containerName="route-controller-manager" containerID="cri-o://bd84c055fb62b221ae5e400c0bf513f9003b6e1381316a6b756b8f05cd4ed5b6" gracePeriod=30 Feb 18 19:23:46 crc kubenswrapper[4754]: I0218 19:23:46.856093 4754 generic.go:334] "Generic (PLEG): container finished" podID="b08b377e-d75f-43a6-bd4d-b9f91255afa6" containerID="f70066021b02afe697b9568338b2fd3daca17cb071b08ce0343327bf614b2a60" exitCode=0 Feb 18 19:23:46 crc kubenswrapper[4754]: I0218 19:23:46.856238 4754 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" event={"ID":"b08b377e-d75f-43a6-bd4d-b9f91255afa6","Type":"ContainerDied","Data":"f70066021b02afe697b9568338b2fd3daca17cb071b08ce0343327bf614b2a60"} Feb 18 19:23:46 crc kubenswrapper[4754]: I0218 19:23:46.858567 4754 generic.go:334] "Generic (PLEG): container finished" podID="93ed2cbd-9ed4-4c84-b55b-edcd90138561" containerID="bd84c055fb62b221ae5e400c0bf513f9003b6e1381316a6b756b8f05cd4ed5b6" exitCode=0 Feb 18 19:23:46 crc kubenswrapper[4754]: I0218 19:23:46.858605 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" event={"ID":"93ed2cbd-9ed4-4c84-b55b-edcd90138561","Type":"ContainerDied","Data":"bd84c055fb62b221ae5e400c0bf513f9003b6e1381316a6b756b8f05cd4ed5b6"} Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.137475 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.211307 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.244751 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7mv4\" (UniqueName: \"kubernetes.io/projected/93ed2cbd-9ed4-4c84-b55b-edcd90138561-kube-api-access-s7mv4\") pod \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\" (UID: \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\") " Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.244811 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93ed2cbd-9ed4-4c84-b55b-edcd90138561-client-ca\") pod \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\" (UID: \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\") " Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.244843 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ed2cbd-9ed4-4c84-b55b-edcd90138561-config\") pod \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\" (UID: \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\") " Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.244994 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ed2cbd-9ed4-4c84-b55b-edcd90138561-serving-cert\") pod \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\" (UID: \"93ed2cbd-9ed4-4c84-b55b-edcd90138561\") " Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.246910 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93ed2cbd-9ed4-4c84-b55b-edcd90138561-client-ca" (OuterVolumeSpecName: "client-ca") pod "93ed2cbd-9ed4-4c84-b55b-edcd90138561" (UID: "93ed2cbd-9ed4-4c84-b55b-edcd90138561"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.246938 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93ed2cbd-9ed4-4c84-b55b-edcd90138561-config" (OuterVolumeSpecName: "config") pod "93ed2cbd-9ed4-4c84-b55b-edcd90138561" (UID: "93ed2cbd-9ed4-4c84-b55b-edcd90138561"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.252393 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ed2cbd-9ed4-4c84-b55b-edcd90138561-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "93ed2cbd-9ed4-4c84-b55b-edcd90138561" (UID: "93ed2cbd-9ed4-4c84-b55b-edcd90138561"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.252867 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93ed2cbd-9ed4-4c84-b55b-edcd90138561-kube-api-access-s7mv4" (OuterVolumeSpecName: "kube-api-access-s7mv4") pod "93ed2cbd-9ed4-4c84-b55b-edcd90138561" (UID: "93ed2cbd-9ed4-4c84-b55b-edcd90138561"). InnerVolumeSpecName "kube-api-access-s7mv4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.345781 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b08b377e-d75f-43a6-bd4d-b9f91255afa6-proxy-ca-bundles\") pod \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\" (UID: \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.345867 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b08b377e-d75f-43a6-bd4d-b9f91255afa6-client-ca\") pod \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\" (UID: \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.345952 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b08b377e-d75f-43a6-bd4d-b9f91255afa6-config\") pod \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\" (UID: \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.345997 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzx8r\" (UniqueName: \"kubernetes.io/projected/b08b377e-d75f-43a6-bd4d-b9f91255afa6-kube-api-access-jzx8r\") pod \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\" (UID: \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.346046 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b08b377e-d75f-43a6-bd4d-b9f91255afa6-serving-cert\") pod \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\" (UID: \"b08b377e-d75f-43a6-bd4d-b9f91255afa6\") " Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.346507 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/93ed2cbd-9ed4-4c84-b55b-edcd90138561-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.346533 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ed2cbd-9ed4-4c84-b55b-edcd90138561-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.346548 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7mv4\" (UniqueName: \"kubernetes.io/projected/93ed2cbd-9ed4-4c84-b55b-edcd90138561-kube-api-access-s7mv4\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.346564 4754 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93ed2cbd-9ed4-4c84-b55b-edcd90138561-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.348015 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b08b377e-d75f-43a6-bd4d-b9f91255afa6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b08b377e-d75f-43a6-bd4d-b9f91255afa6" (UID: "b08b377e-d75f-43a6-bd4d-b9f91255afa6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.348361 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b08b377e-d75f-43a6-bd4d-b9f91255afa6-client-ca" (OuterVolumeSpecName: "client-ca") pod "b08b377e-d75f-43a6-bd4d-b9f91255afa6" (UID: "b08b377e-d75f-43a6-bd4d-b9f91255afa6"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.349630 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b08b377e-d75f-43a6-bd4d-b9f91255afa6-config" (OuterVolumeSpecName: "config") pod "b08b377e-d75f-43a6-bd4d-b9f91255afa6" (UID: "b08b377e-d75f-43a6-bd4d-b9f91255afa6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.352168 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b08b377e-d75f-43a6-bd4d-b9f91255afa6-kube-api-access-jzx8r" (OuterVolumeSpecName: "kube-api-access-jzx8r") pod "b08b377e-d75f-43a6-bd4d-b9f91255afa6" (UID: "b08b377e-d75f-43a6-bd4d-b9f91255afa6"). InnerVolumeSpecName "kube-api-access-jzx8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.352629 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b08b377e-d75f-43a6-bd4d-b9f91255afa6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b08b377e-d75f-43a6-bd4d-b9f91255afa6" (UID: "b08b377e-d75f-43a6-bd4d-b9f91255afa6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.448298 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b08b377e-d75f-43a6-bd4d-b9f91255afa6-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.448343 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzx8r\" (UniqueName: \"kubernetes.io/projected/b08b377e-d75f-43a6-bd4d-b9f91255afa6-kube-api-access-jzx8r\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.448358 4754 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b08b377e-d75f-43a6-bd4d-b9f91255afa6-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.448367 4754 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b08b377e-d75f-43a6-bd4d-b9f91255afa6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.448376 4754 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b08b377e-d75f-43a6-bd4d-b9f91255afa6-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.899605 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.899654 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fcbc587-zxgdt" event={"ID":"b08b377e-d75f-43a6-bd4d-b9f91255afa6","Type":"ContainerDied","Data":"872040d24979dfd837e304d730a182b3b366dad34df1021af2168936b2d62294"} Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.900024 4754 scope.go:117] "RemoveContainer" containerID="f70066021b02afe697b9568338b2fd3daca17cb071b08ce0343327bf614b2a60" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.909661 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" event={"ID":"93ed2cbd-9ed4-4c84-b55b-edcd90138561","Type":"ContainerDied","Data":"cd62d55f5d1e403a49f29afbe77287aad9d51692fa06ad65f4056a7e16cb622f"} Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.909804 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.952292 4754 scope.go:117] "RemoveContainer" containerID="bd84c055fb62b221ae5e400c0bf513f9003b6e1381316a6b756b8f05cd4ed5b6" Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.962492 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-598fcbc587-zxgdt"] Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.973887 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-598fcbc587-zxgdt"] Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.979839 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk"] Feb 18 19:23:47 crc kubenswrapper[4754]: I0218 19:23:47.984348 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c947bc4c-hjkkk"] Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.216872 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93ed2cbd-9ed4-4c84-b55b-edcd90138561" path="/var/lib/kubelet/pods/93ed2cbd-9ed4-4c84-b55b-edcd90138561/volumes" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.217488 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b08b377e-d75f-43a6-bd4d-b9f91255afa6" path="/var/lib/kubelet/pods/b08b377e-d75f-43a6-bd4d-b9f91255afa6/volumes" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.419734 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5475699776-w4kbv"] Feb 18 19:23:48 crc kubenswrapper[4754]: E0218 19:23:48.420050 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08b377e-d75f-43a6-bd4d-b9f91255afa6" containerName="controller-manager" Feb 18 19:23:48 crc 
kubenswrapper[4754]: I0218 19:23:48.420069 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b08b377e-d75f-43a6-bd4d-b9f91255afa6" containerName="controller-manager" Feb 18 19:23:48 crc kubenswrapper[4754]: E0218 19:23:48.420094 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ed2cbd-9ed4-4c84-b55b-edcd90138561" containerName="route-controller-manager" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.420103 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ed2cbd-9ed4-4c84-b55b-edcd90138561" containerName="route-controller-manager" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.420216 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="b08b377e-d75f-43a6-bd4d-b9f91255afa6" containerName="controller-manager" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.420236 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="93ed2cbd-9ed4-4c84-b55b-edcd90138561" containerName="route-controller-manager" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.420699 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.422787 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.422929 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.424009 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.424310 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.426942 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.427043 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.428784 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv"] Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.429645 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.432183 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.432216 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.432462 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.432562 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.432708 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.432777 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.434617 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.437840 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv"] Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.445484 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5475699776-w4kbv"] Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.574192 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-52tf6\" (UniqueName: \"kubernetes.io/projected/64043b56-4c43-4610-aecd-061953e35884-kube-api-access-52tf6\") pod \"route-controller-manager-6d5b49b6fd-vbgkv\" (UID: \"64043b56-4c43-4610-aecd-061953e35884\") " pod="openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.574315 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64043b56-4c43-4610-aecd-061953e35884-config\") pod \"route-controller-manager-6d5b49b6fd-vbgkv\" (UID: \"64043b56-4c43-4610-aecd-061953e35884\") " pod="openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.574460 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f38e076d-a3f2-41de-a26a-e2e82a3cdfb3-proxy-ca-bundles\") pod \"controller-manager-5475699776-w4kbv\" (UID: \"f38e076d-a3f2-41de-a26a-e2e82a3cdfb3\") " pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.574563 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br4gh\" (UniqueName: \"kubernetes.io/projected/f38e076d-a3f2-41de-a26a-e2e82a3cdfb3-kube-api-access-br4gh\") pod \"controller-manager-5475699776-w4kbv\" (UID: \"f38e076d-a3f2-41de-a26a-e2e82a3cdfb3\") " pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.574610 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38e076d-a3f2-41de-a26a-e2e82a3cdfb3-config\") pod \"controller-manager-5475699776-w4kbv\" (UID: 
\"f38e076d-a3f2-41de-a26a-e2e82a3cdfb3\") " pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.574758 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f38e076d-a3f2-41de-a26a-e2e82a3cdfb3-serving-cert\") pod \"controller-manager-5475699776-w4kbv\" (UID: \"f38e076d-a3f2-41de-a26a-e2e82a3cdfb3\") " pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.574841 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64043b56-4c43-4610-aecd-061953e35884-client-ca\") pod \"route-controller-manager-6d5b49b6fd-vbgkv\" (UID: \"64043b56-4c43-4610-aecd-061953e35884\") " pod="openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.574944 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f38e076d-a3f2-41de-a26a-e2e82a3cdfb3-client-ca\") pod \"controller-manager-5475699776-w4kbv\" (UID: \"f38e076d-a3f2-41de-a26a-e2e82a3cdfb3\") " pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.574982 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64043b56-4c43-4610-aecd-061953e35884-serving-cert\") pod \"route-controller-manager-6d5b49b6fd-vbgkv\" (UID: \"64043b56-4c43-4610-aecd-061953e35884\") " pod="openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.676600 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-br4gh\" (UniqueName: \"kubernetes.io/projected/f38e076d-a3f2-41de-a26a-e2e82a3cdfb3-kube-api-access-br4gh\") pod \"controller-manager-5475699776-w4kbv\" (UID: \"f38e076d-a3f2-41de-a26a-e2e82a3cdfb3\") " pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.677111 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38e076d-a3f2-41de-a26a-e2e82a3cdfb3-config\") pod \"controller-manager-5475699776-w4kbv\" (UID: \"f38e076d-a3f2-41de-a26a-e2e82a3cdfb3\") " pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.677162 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f38e076d-a3f2-41de-a26a-e2e82a3cdfb3-serving-cert\") pod \"controller-manager-5475699776-w4kbv\" (UID: \"f38e076d-a3f2-41de-a26a-e2e82a3cdfb3\") " pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.677196 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/64043b56-4c43-4610-aecd-061953e35884-client-ca\") pod \"route-controller-manager-6d5b49b6fd-vbgkv\" (UID: \"64043b56-4c43-4610-aecd-061953e35884\") " pod="openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.677225 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f38e076d-a3f2-41de-a26a-e2e82a3cdfb3-client-ca\") pod \"controller-manager-5475699776-w4kbv\" (UID: \"f38e076d-a3f2-41de-a26a-e2e82a3cdfb3\") " 
pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.677246 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64043b56-4c43-4610-aecd-061953e35884-serving-cert\") pod \"route-controller-manager-6d5b49b6fd-vbgkv\" (UID: \"64043b56-4c43-4610-aecd-061953e35884\") " pod="openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.677305 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52tf6\" (UniqueName: \"kubernetes.io/projected/64043b56-4c43-4610-aecd-061953e35884-kube-api-access-52tf6\") pod \"route-controller-manager-6d5b49b6fd-vbgkv\" (UID: \"64043b56-4c43-4610-aecd-061953e35884\") " pod="openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.677365 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64043b56-4c43-4610-aecd-061953e35884-config\") pod \"route-controller-manager-6d5b49b6fd-vbgkv\" (UID: \"64043b56-4c43-4610-aecd-061953e35884\") " pod="openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.677398 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f38e076d-a3f2-41de-a26a-e2e82a3cdfb3-proxy-ca-bundles\") pod \"controller-manager-5475699776-w4kbv\" (UID: \"f38e076d-a3f2-41de-a26a-e2e82a3cdfb3\") " pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.678452 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/64043b56-4c43-4610-aecd-061953e35884-client-ca\") pod \"route-controller-manager-6d5b49b6fd-vbgkv\" (UID: \"64043b56-4c43-4610-aecd-061953e35884\") " pod="openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.679005 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38e076d-a3f2-41de-a26a-e2e82a3cdfb3-config\") pod \"controller-manager-5475699776-w4kbv\" (UID: \"f38e076d-a3f2-41de-a26a-e2e82a3cdfb3\") " pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.679067 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f38e076d-a3f2-41de-a26a-e2e82a3cdfb3-proxy-ca-bundles\") pod \"controller-manager-5475699776-w4kbv\" (UID: \"f38e076d-a3f2-41de-a26a-e2e82a3cdfb3\") " pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.679371 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f38e076d-a3f2-41de-a26a-e2e82a3cdfb3-client-ca\") pod \"controller-manager-5475699776-w4kbv\" (UID: \"f38e076d-a3f2-41de-a26a-e2e82a3cdfb3\") " pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.679500 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64043b56-4c43-4610-aecd-061953e35884-config\") pod \"route-controller-manager-6d5b49b6fd-vbgkv\" (UID: \"64043b56-4c43-4610-aecd-061953e35884\") " pod="openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.683996 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64043b56-4c43-4610-aecd-061953e35884-serving-cert\") pod \"route-controller-manager-6d5b49b6fd-vbgkv\" (UID: \"64043b56-4c43-4610-aecd-061953e35884\") " pod="openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.695858 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f38e076d-a3f2-41de-a26a-e2e82a3cdfb3-serving-cert\") pod \"controller-manager-5475699776-w4kbv\" (UID: \"f38e076d-a3f2-41de-a26a-e2e82a3cdfb3\") " pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.696549 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br4gh\" (UniqueName: \"kubernetes.io/projected/f38e076d-a3f2-41de-a26a-e2e82a3cdfb3-kube-api-access-br4gh\") pod \"controller-manager-5475699776-w4kbv\" (UID: \"f38e076d-a3f2-41de-a26a-e2e82a3cdfb3\") " pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.700603 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52tf6\" (UniqueName: \"kubernetes.io/projected/64043b56-4c43-4610-aecd-061953e35884-kube-api-access-52tf6\") pod \"route-controller-manager-6d5b49b6fd-vbgkv\" (UID: \"64043b56-4c43-4610-aecd-061953e35884\") " pod="openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.745919 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" Feb 18 19:23:48 crc kubenswrapper[4754]: I0218 19:23:48.755127 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv" Feb 18 19:23:49 crc kubenswrapper[4754]: I0218 19:23:49.079058 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv"] Feb 18 19:23:49 crc kubenswrapper[4754]: W0218 19:23:49.086106 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64043b56_4c43_4610_aecd_061953e35884.slice/crio-52607cef75ea2cb5ce53894eb9fdccd4e854ea6266ad122413f29565e94652bd WatchSource:0}: Error finding container 52607cef75ea2cb5ce53894eb9fdccd4e854ea6266ad122413f29565e94652bd: Status 404 returned error can't find the container with id 52607cef75ea2cb5ce53894eb9fdccd4e854ea6266ad122413f29565e94652bd Feb 18 19:23:49 crc kubenswrapper[4754]: I0218 19:23:49.219834 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5475699776-w4kbv"] Feb 18 19:23:49 crc kubenswrapper[4754]: W0218 19:23:49.230591 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf38e076d_a3f2_41de_a26a_e2e82a3cdfb3.slice/crio-6d96f374569e719b04ee47da79f5145b33673d9ccb5e3da6ee1586c7dd3441e5 WatchSource:0}: Error finding container 6d96f374569e719b04ee47da79f5145b33673d9ccb5e3da6ee1586c7dd3441e5: Status 404 returned error can't find the container with id 6d96f374569e719b04ee47da79f5145b33673d9ccb5e3da6ee1586c7dd3441e5 Feb 18 19:23:49 crc kubenswrapper[4754]: I0218 19:23:49.933243 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" event={"ID":"f38e076d-a3f2-41de-a26a-e2e82a3cdfb3","Type":"ContainerStarted","Data":"53890da77fd3d1640874b3353d95c593d0cf88e6e760a328eab17fdc5e952edc"} Feb 18 19:23:49 crc kubenswrapper[4754]: I0218 19:23:49.933786 4754 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" event={"ID":"f38e076d-a3f2-41de-a26a-e2e82a3cdfb3","Type":"ContainerStarted","Data":"6d96f374569e719b04ee47da79f5145b33673d9ccb5e3da6ee1586c7dd3441e5"}
Feb 18 19:23:49 crc kubenswrapper[4754]: I0218 19:23:49.933816 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5475699776-w4kbv"
Feb 18 19:23:49 crc kubenswrapper[4754]: I0218 19:23:49.934915 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv" event={"ID":"64043b56-4c43-4610-aecd-061953e35884","Type":"ContainerStarted","Data":"9b195094202d6f1b64e089b971fb1a11b4a13343d8b9eb0d19bc6baefbc922cf"}
Feb 18 19:23:49 crc kubenswrapper[4754]: I0218 19:23:49.934991 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv" event={"ID":"64043b56-4c43-4610-aecd-061953e35884","Type":"ContainerStarted","Data":"52607cef75ea2cb5ce53894eb9fdccd4e854ea6266ad122413f29565e94652bd"}
Feb 18 19:23:49 crc kubenswrapper[4754]: I0218 19:23:49.935261 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv"
Feb 18 19:23:49 crc kubenswrapper[4754]: I0218 19:23:49.939602 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5475699776-w4kbv"
Feb 18 19:23:49 crc kubenswrapper[4754]: I0218 19:23:49.940229 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv"
Feb 18 19:23:49 crc kubenswrapper[4754]: I0218 19:23:49.955839 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5475699776-w4kbv" podStartSLOduration=3.955790743 podStartE2EDuration="3.955790743s" podCreationTimestamp="2026-02-18 19:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:23:49.94994465 +0000 UTC m=+332.400357446" watchObservedRunningTime="2026-02-18 19:23:49.955790743 +0000 UTC m=+332.406203589"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.041955 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d5b49b6fd-vbgkv" podStartSLOduration=4.04192596 podStartE2EDuration="4.04192596s" podCreationTimestamp="2026-02-18 19:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:23:50.019800343 +0000 UTC m=+332.470213139" watchObservedRunningTime="2026-02-18 19:23:50.04192596 +0000 UTC m=+332.492338756"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.702552 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-c6m7k"]
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.703785 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.733177 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-c6m7k"]
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.815202 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.815303 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9439b659-3f04-4ee4-a1cf-ffd424015aed-registry-tls\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.815367 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9439b659-3f04-4ee4-a1cf-ffd424015aed-bound-sa-token\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.815453 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9439b659-3f04-4ee4-a1cf-ffd424015aed-installation-pull-secrets\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.815510 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxgdw\" (UniqueName: \"kubernetes.io/projected/9439b659-3f04-4ee4-a1cf-ffd424015aed-kube-api-access-jxgdw\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.815549 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9439b659-3f04-4ee4-a1cf-ffd424015aed-ca-trust-extracted\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.815593 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9439b659-3f04-4ee4-a1cf-ffd424015aed-trusted-ca\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.815716 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9439b659-3f04-4ee4-a1cf-ffd424015aed-registry-certificates\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.856829 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.916856 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9439b659-3f04-4ee4-a1cf-ffd424015aed-installation-pull-secrets\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.917182 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxgdw\" (UniqueName: \"kubernetes.io/projected/9439b659-3f04-4ee4-a1cf-ffd424015aed-kube-api-access-jxgdw\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.917306 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9439b659-3f04-4ee4-a1cf-ffd424015aed-ca-trust-extracted\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.917397 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9439b659-3f04-4ee4-a1cf-ffd424015aed-trusted-ca\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.917524 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9439b659-3f04-4ee4-a1cf-ffd424015aed-registry-certificates\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.918024 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9439b659-3f04-4ee4-a1cf-ffd424015aed-ca-trust-extracted\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.918608 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9439b659-3f04-4ee4-a1cf-ffd424015aed-trusted-ca\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.918892 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9439b659-3f04-4ee4-a1cf-ffd424015aed-registry-certificates\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.919036 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9439b659-3f04-4ee4-a1cf-ffd424015aed-registry-tls\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.919585 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9439b659-3f04-4ee4-a1cf-ffd424015aed-bound-sa-token\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.926085 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9439b659-3f04-4ee4-a1cf-ffd424015aed-installation-pull-secrets\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.932834 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9439b659-3f04-4ee4-a1cf-ffd424015aed-registry-tls\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.950993 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxgdw\" (UniqueName: \"kubernetes.io/projected/9439b659-3f04-4ee4-a1cf-ffd424015aed-kube-api-access-jxgdw\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:50 crc kubenswrapper[4754]: I0218 19:23:50.958864 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9439b659-3f04-4ee4-a1cf-ffd424015aed-bound-sa-token\") pod \"image-registry-66df7c8f76-c6m7k\" (UID: \"9439b659-3f04-4ee4-a1cf-ffd424015aed\") " pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:51 crc kubenswrapper[4754]: I0218 19:23:51.022472 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:51 crc kubenswrapper[4754]: I0218 19:23:51.521810 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-c6m7k"]
Feb 18 19:23:51 crc kubenswrapper[4754]: W0218 19:23:51.525567 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9439b659_3f04_4ee4_a1cf_ffd424015aed.slice/crio-ef9ff2e00899f386a37aa48dd41fbee4aa0407579f3cc653de28e42f53b6991c WatchSource:0}: Error finding container ef9ff2e00899f386a37aa48dd41fbee4aa0407579f3cc653de28e42f53b6991c: Status 404 returned error can't find the container with id ef9ff2e00899f386a37aa48dd41fbee4aa0407579f3cc653de28e42f53b6991c
Feb 18 19:23:51 crc kubenswrapper[4754]: I0218 19:23:51.950965 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k" event={"ID":"9439b659-3f04-4ee4-a1cf-ffd424015aed","Type":"ContainerStarted","Data":"7c861ffbeb64c2ab703d760b73135ca3256908f268a26d79db75a8769fdafff9"}
Feb 18 19:23:51 crc kubenswrapper[4754]: I0218 19:23:51.951566 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k" event={"ID":"9439b659-3f04-4ee4-a1cf-ffd424015aed","Type":"ContainerStarted","Data":"ef9ff2e00899f386a37aa48dd41fbee4aa0407579f3cc653de28e42f53b6991c"}
Feb 18 19:23:51 crc kubenswrapper[4754]: I0218 19:23:51.951627 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k"
Feb 18 19:23:51 crc kubenswrapper[4754]: I0218 19:23:51.977966 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k" podStartSLOduration=1.977935472 podStartE2EDuration="1.977935472s" podCreationTimestamp="2026-02-18 19:23:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:23:51.976583572 +0000 UTC m=+334.426996398" watchObservedRunningTime="2026-02-18 19:23:51.977935472 +0000 UTC m=+334.428348278"
Feb 18 19:23:55 crc kubenswrapper[4754]: I0218 19:23:55.927874 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x9nlb"]
Feb 18 19:23:55 crc kubenswrapper[4754]: I0218 19:23:55.930884 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x9nlb" podUID="a91e02b9-77f2-4adc-8255-ef6dca75c2cf" containerName="registry-server" containerID="cri-o://f76b7291b614c9eabdb7729d041274156b4e58a60d8ffd716cea2b82a6473923" gracePeriod=30
Feb 18 19:23:55 crc kubenswrapper[4754]: I0218 19:23:55.956091 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6p9z2"]
Feb 18 19:23:55 crc kubenswrapper[4754]: I0218 19:23:55.956518 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6p9z2" podUID="4699f1a8-9e55-49b6-a67f-f84bd256fa0f" containerName="registry-server" containerID="cri-o://2e37716ed0c3c206e0f8735547718eb6c86af1e25889c3a19b165bd8c72c0516" gracePeriod=30
Feb 18 19:23:55 crc kubenswrapper[4754]: I0218 19:23:55.970993 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wljc4"]
Feb 18 19:23:55 crc kubenswrapper[4754]: I0218 19:23:55.971272 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" podUID="8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae" containerName="marketplace-operator" containerID="cri-o://3dfd8b4ff502d44cc1f5efac4120461e72872fa164a33b2fb93dc55bd3f38271" gracePeriod=30
Feb 18 19:23:55 crc kubenswrapper[4754]: I0218 19:23:55.981732 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2vnz"]
Feb 18 19:23:55 crc kubenswrapper[4754]: I0218 19:23:55.982125 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k2vnz" podUID="cabec9c1-d434-4382-87e9-c488658c02fe" containerName="registry-server" containerID="cri-o://2a39424223d27318f7b1d0ff56381c0969e63c845e5af658cf26d835cf2a7cd9" gracePeriod=30
Feb 18 19:23:55 crc kubenswrapper[4754]: I0218 19:23:55.990085 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kqd9p"]
Feb 18 19:23:55 crc kubenswrapper[4754]: I0218 19:23:55.990510 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kqd9p" podUID="51b42468-3dc4-425d-ae7c-de59263bbf39" containerName="registry-server" containerID="cri-o://99f5323af030d766e4947277a42d9b7ecfc9cdcdd8079f7a2d670b7f1e6b66da" gracePeriod=30
Feb 18 19:23:55 crc kubenswrapper[4754]: I0218 19:23:55.994794 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vxtt4"]
Feb 18 19:23:55 crc kubenswrapper[4754]: I0218 19:23:55.996288 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vxtt4"
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.006881 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vxtt4"]
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.103954 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72accc0b-fc2a-42ee-9d17-cad3b19abed3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vxtt4\" (UID: \"72accc0b-fc2a-42ee-9d17-cad3b19abed3\") " pod="openshift-marketplace/marketplace-operator-79b997595-vxtt4"
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.104569 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72accc0b-fc2a-42ee-9d17-cad3b19abed3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vxtt4\" (UID: \"72accc0b-fc2a-42ee-9d17-cad3b19abed3\") " pod="openshift-marketplace/marketplace-operator-79b997595-vxtt4"
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.104643 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5khsw\" (UniqueName: \"kubernetes.io/projected/72accc0b-fc2a-42ee-9d17-cad3b19abed3-kube-api-access-5khsw\") pod \"marketplace-operator-79b997595-vxtt4\" (UID: \"72accc0b-fc2a-42ee-9d17-cad3b19abed3\") " pod="openshift-marketplace/marketplace-operator-79b997595-vxtt4"
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.206433 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72accc0b-fc2a-42ee-9d17-cad3b19abed3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vxtt4\" (UID: \"72accc0b-fc2a-42ee-9d17-cad3b19abed3\") " pod="openshift-marketplace/marketplace-operator-79b997595-vxtt4"
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.206489 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72accc0b-fc2a-42ee-9d17-cad3b19abed3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vxtt4\" (UID: \"72accc0b-fc2a-42ee-9d17-cad3b19abed3\") " pod="openshift-marketplace/marketplace-operator-79b997595-vxtt4"
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.206537 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5khsw\" (UniqueName: \"kubernetes.io/projected/72accc0b-fc2a-42ee-9d17-cad3b19abed3-kube-api-access-5khsw\") pod \"marketplace-operator-79b997595-vxtt4\" (UID: \"72accc0b-fc2a-42ee-9d17-cad3b19abed3\") " pod="openshift-marketplace/marketplace-operator-79b997595-vxtt4"
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.208418 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72accc0b-fc2a-42ee-9d17-cad3b19abed3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vxtt4\" (UID: \"72accc0b-fc2a-42ee-9d17-cad3b19abed3\") " pod="openshift-marketplace/marketplace-operator-79b997595-vxtt4"
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.217868 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72accc0b-fc2a-42ee-9d17-cad3b19abed3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vxtt4\" (UID: \"72accc0b-fc2a-42ee-9d17-cad3b19abed3\") " pod="openshift-marketplace/marketplace-operator-79b997595-vxtt4"
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.225247 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5khsw\" (UniqueName: \"kubernetes.io/projected/72accc0b-fc2a-42ee-9d17-cad3b19abed3-kube-api-access-5khsw\") pod \"marketplace-operator-79b997595-vxtt4\" (UID: \"72accc0b-fc2a-42ee-9d17-cad3b19abed3\") " pod="openshift-marketplace/marketplace-operator-79b997595-vxtt4"
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.393269 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vxtt4"
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.557055 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x9nlb"
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.611889 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vqhd\" (UniqueName: \"kubernetes.io/projected/a91e02b9-77f2-4adc-8255-ef6dca75c2cf-kube-api-access-7vqhd\") pod \"a91e02b9-77f2-4adc-8255-ef6dca75c2cf\" (UID: \"a91e02b9-77f2-4adc-8255-ef6dca75c2cf\") "
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.611969 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a91e02b9-77f2-4adc-8255-ef6dca75c2cf-catalog-content\") pod \"a91e02b9-77f2-4adc-8255-ef6dca75c2cf\" (UID: \"a91e02b9-77f2-4adc-8255-ef6dca75c2cf\") "
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.612046 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a91e02b9-77f2-4adc-8255-ef6dca75c2cf-utilities\") pod \"a91e02b9-77f2-4adc-8255-ef6dca75c2cf\" (UID: \"a91e02b9-77f2-4adc-8255-ef6dca75c2cf\") "
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.614401 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a91e02b9-77f2-4adc-8255-ef6dca75c2cf-utilities" (OuterVolumeSpecName: "utilities") pod "a91e02b9-77f2-4adc-8255-ef6dca75c2cf" (UID: "a91e02b9-77f2-4adc-8255-ef6dca75c2cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.616629 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a91e02b9-77f2-4adc-8255-ef6dca75c2cf-kube-api-access-7vqhd" (OuterVolumeSpecName: "kube-api-access-7vqhd") pod "a91e02b9-77f2-4adc-8255-ef6dca75c2cf" (UID: "a91e02b9-77f2-4adc-8255-ef6dca75c2cf"). InnerVolumeSpecName "kube-api-access-7vqhd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.684312 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a91e02b9-77f2-4adc-8255-ef6dca75c2cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a91e02b9-77f2-4adc-8255-ef6dca75c2cf" (UID: "a91e02b9-77f2-4adc-8255-ef6dca75c2cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.689016 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wljc4"
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.713162 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a91e02b9-77f2-4adc-8255-ef6dca75c2cf-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.713193 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vqhd\" (UniqueName: \"kubernetes.io/projected/a91e02b9-77f2-4adc-8255-ef6dca75c2cf-kube-api-access-7vqhd\") on node \"crc\" DevicePath \"\""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.713207 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a91e02b9-77f2-4adc-8255-ef6dca75c2cf-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.713400 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kqd9p"
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.736702 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2vnz"
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.745773 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6p9z2"
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.814344 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4699f1a8-9e55-49b6-a67f-f84bd256fa0f-utilities\") pod \"4699f1a8-9e55-49b6-a67f-f84bd256fa0f\" (UID: \"4699f1a8-9e55-49b6-a67f-f84bd256fa0f\") "
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.814404 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcv9v\" (UniqueName: \"kubernetes.io/projected/cabec9c1-d434-4382-87e9-c488658c02fe-kube-api-access-fcv9v\") pod \"cabec9c1-d434-4382-87e9-c488658c02fe\" (UID: \"cabec9c1-d434-4382-87e9-c488658c02fe\") "
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.814431 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cabec9c1-d434-4382-87e9-c488658c02fe-utilities\") pod \"cabec9c1-d434-4382-87e9-c488658c02fe\" (UID: \"cabec9c1-d434-4382-87e9-c488658c02fe\") "
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.814476 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae-marketplace-operator-metrics\") pod \"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae\" (UID: \"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae\") "
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.814557 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae-marketplace-trusted-ca\") pod \"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae\" (UID: \"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae\") "
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.814574 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cabec9c1-d434-4382-87e9-c488658c02fe-catalog-content\") pod \"cabec9c1-d434-4382-87e9-c488658c02fe\" (UID: \"cabec9c1-d434-4382-87e9-c488658c02fe\") "
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.814614 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szs7j\" (UniqueName: \"kubernetes.io/projected/51b42468-3dc4-425d-ae7c-de59263bbf39-kube-api-access-szs7j\") pod \"51b42468-3dc4-425d-ae7c-de59263bbf39\" (UID: \"51b42468-3dc4-425d-ae7c-de59263bbf39\") "
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.814638 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4699f1a8-9e55-49b6-a67f-f84bd256fa0f-catalog-content\") pod \"4699f1a8-9e55-49b6-a67f-f84bd256fa0f\" (UID: \"4699f1a8-9e55-49b6-a67f-f84bd256fa0f\") "
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.814657 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h7hm\" (UniqueName: \"kubernetes.io/projected/4699f1a8-9e55-49b6-a67f-f84bd256fa0f-kube-api-access-5h7hm\") pod \"4699f1a8-9e55-49b6-a67f-f84bd256fa0f\" (UID: \"4699f1a8-9e55-49b6-a67f-f84bd256fa0f\") "
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.814682 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51b42468-3dc4-425d-ae7c-de59263bbf39-utilities\") pod \"51b42468-3dc4-425d-ae7c-de59263bbf39\" (UID: \"51b42468-3dc4-425d-ae7c-de59263bbf39\") "
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.814711 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqc8b\" (UniqueName: \"kubernetes.io/projected/8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae-kube-api-access-tqc8b\") pod \"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae\" (UID: \"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae\") "
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.814735 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51b42468-3dc4-425d-ae7c-de59263bbf39-catalog-content\") pod \"51b42468-3dc4-425d-ae7c-de59263bbf39\" (UID: \"51b42468-3dc4-425d-ae7c-de59263bbf39\") "
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.817949 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4699f1a8-9e55-49b6-a67f-f84bd256fa0f-utilities" (OuterVolumeSpecName: "utilities") pod "4699f1a8-9e55-49b6-a67f-f84bd256fa0f" (UID: "4699f1a8-9e55-49b6-a67f-f84bd256fa0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.819546 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cabec9c1-d434-4382-87e9-c488658c02fe-utilities" (OuterVolumeSpecName: "utilities") pod "cabec9c1-d434-4382-87e9-c488658c02fe" (UID: "cabec9c1-d434-4382-87e9-c488658c02fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.820645 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51b42468-3dc4-425d-ae7c-de59263bbf39-utilities" (OuterVolumeSpecName: "utilities") pod "51b42468-3dc4-425d-ae7c-de59263bbf39" (UID: "51b42468-3dc4-425d-ae7c-de59263bbf39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.822001 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae" (UID: "8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.823083 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae" (UID: "8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.823350 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4699f1a8-9e55-49b6-a67f-f84bd256fa0f-kube-api-access-5h7hm" (OuterVolumeSpecName: "kube-api-access-5h7hm") pod "4699f1a8-9e55-49b6-a67f-f84bd256fa0f" (UID: "4699f1a8-9e55-49b6-a67f-f84bd256fa0f"). InnerVolumeSpecName "kube-api-access-5h7hm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.824812 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae-kube-api-access-tqc8b" (OuterVolumeSpecName: "kube-api-access-tqc8b") pod "8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae" (UID: "8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae"). InnerVolumeSpecName "kube-api-access-tqc8b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.832509 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51b42468-3dc4-425d-ae7c-de59263bbf39-kube-api-access-szs7j" (OuterVolumeSpecName: "kube-api-access-szs7j") pod "51b42468-3dc4-425d-ae7c-de59263bbf39" (UID: "51b42468-3dc4-425d-ae7c-de59263bbf39"). InnerVolumeSpecName "kube-api-access-szs7j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.834570 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cabec9c1-d434-4382-87e9-c488658c02fe-kube-api-access-fcv9v" (OuterVolumeSpecName: "kube-api-access-fcv9v") pod "cabec9c1-d434-4382-87e9-c488658c02fe" (UID: "cabec9c1-d434-4382-87e9-c488658c02fe"). InnerVolumeSpecName "kube-api-access-fcv9v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.848906 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cabec9c1-d434-4382-87e9-c488658c02fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cabec9c1-d434-4382-87e9-c488658c02fe" (UID: "cabec9c1-d434-4382-87e9-c488658c02fe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.886483 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4699f1a8-9e55-49b6-a67f-f84bd256fa0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4699f1a8-9e55-49b6-a67f-f84bd256fa0f" (UID: "4699f1a8-9e55-49b6-a67f-f84bd256fa0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.917229 4754 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.917274 4754 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.917291 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cabec9c1-d434-4382-87e9-c488658c02fe-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.917305 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szs7j\" (UniqueName: \"kubernetes.io/projected/51b42468-3dc4-425d-ae7c-de59263bbf39-kube-api-access-szs7j\") on node \"crc\" DevicePath \"\""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.917320 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4699f1a8-9e55-49b6-a67f-f84bd256fa0f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.917336 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h7hm\" (UniqueName: \"kubernetes.io/projected/4699f1a8-9e55-49b6-a67f-f84bd256fa0f-kube-api-access-5h7hm\") on node \"crc\" DevicePath \"\""
Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.917352 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51b42468-3dc4-425d-ae7c-de59263bbf39-utilities\") on node \"crc\" DevicePath \"\""
Feb
18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.917367 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqc8b\" (UniqueName: \"kubernetes.io/projected/8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae-kube-api-access-tqc8b\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.917379 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4699f1a8-9e55-49b6-a67f-f84bd256fa0f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.917394 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcv9v\" (UniqueName: \"kubernetes.io/projected/cabec9c1-d434-4382-87e9-c488658c02fe-kube-api-access-fcv9v\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.917406 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cabec9c1-d434-4382-87e9-c488658c02fe-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.954055 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51b42468-3dc4-425d-ae7c-de59263bbf39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51b42468-3dc4-425d-ae7c-de59263bbf39" (UID: "51b42468-3dc4-425d-ae7c-de59263bbf39"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.995204 4754 generic.go:334] "Generic (PLEG): container finished" podID="51b42468-3dc4-425d-ae7c-de59263bbf39" containerID="99f5323af030d766e4947277a42d9b7ecfc9cdcdd8079f7a2d670b7f1e6b66da" exitCode=0 Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.995299 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqd9p" event={"ID":"51b42468-3dc4-425d-ae7c-de59263bbf39","Type":"ContainerDied","Data":"99f5323af030d766e4947277a42d9b7ecfc9cdcdd8079f7a2d670b7f1e6b66da"} Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.995343 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqd9p" event={"ID":"51b42468-3dc4-425d-ae7c-de59263bbf39","Type":"ContainerDied","Data":"99f72b88150e585ba3b44b71829ef7041459cbc65ff0c85968a237c0f5e9ba21"} Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.995374 4754 scope.go:117] "RemoveContainer" containerID="99f5323af030d766e4947277a42d9b7ecfc9cdcdd8079f7a2d670b7f1e6b66da" Feb 18 19:23:56 crc kubenswrapper[4754]: I0218 19:23:56.995547 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kqd9p" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.008733 4754 generic.go:334] "Generic (PLEG): container finished" podID="cabec9c1-d434-4382-87e9-c488658c02fe" containerID="2a39424223d27318f7b1d0ff56381c0969e63c845e5af658cf26d835cf2a7cd9" exitCode=0 Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.008850 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2vnz" event={"ID":"cabec9c1-d434-4382-87e9-c488658c02fe","Type":"ContainerDied","Data":"2a39424223d27318f7b1d0ff56381c0969e63c845e5af658cf26d835cf2a7cd9"} Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.008902 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2vnz" event={"ID":"cabec9c1-d434-4382-87e9-c488658c02fe","Type":"ContainerDied","Data":"d62f67f0091026e4e15b210099841ac49e007d4acbffde65f973d782e28a3103"} Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.009014 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2vnz" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.017189 4754 generic.go:334] "Generic (PLEG): container finished" podID="8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae" containerID="3dfd8b4ff502d44cc1f5efac4120461e72872fa164a33b2fb93dc55bd3f38271" exitCode=0 Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.017270 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.017321 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" event={"ID":"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae","Type":"ContainerDied","Data":"3dfd8b4ff502d44cc1f5efac4120461e72872fa164a33b2fb93dc55bd3f38271"} Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.017389 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wljc4" event={"ID":"8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae","Type":"ContainerDied","Data":"3f8938b84e36163ce988f888822b6b4143cec45057a734bdca4e36742ebcc8ec"} Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.018250 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51b42468-3dc4-425d-ae7c-de59263bbf39-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.022903 4754 generic.go:334] "Generic (PLEG): container finished" podID="a91e02b9-77f2-4adc-8255-ef6dca75c2cf" containerID="f76b7291b614c9eabdb7729d041274156b4e58a60d8ffd716cea2b82a6473923" exitCode=0 Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.023242 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x9nlb" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.023001 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x9nlb" event={"ID":"a91e02b9-77f2-4adc-8255-ef6dca75c2cf","Type":"ContainerDied","Data":"f76b7291b614c9eabdb7729d041274156b4e58a60d8ffd716cea2b82a6473923"} Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.023414 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x9nlb" event={"ID":"a91e02b9-77f2-4adc-8255-ef6dca75c2cf","Type":"ContainerDied","Data":"be287a567d83c993343d142be4d6e70a903de9ca8b20dbf261de162e3b7b53e0"} Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.030910 4754 scope.go:117] "RemoveContainer" containerID="944329832a2f0c86fb6b861ae90dfb007da824dbf41975e0295e3834dbc74d33" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.031530 4754 generic.go:334] "Generic (PLEG): container finished" podID="4699f1a8-9e55-49b6-a67f-f84bd256fa0f" containerID="2e37716ed0c3c206e0f8735547718eb6c86af1e25889c3a19b165bd8c72c0516" exitCode=0 Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.031613 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6p9z2" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.031768 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6p9z2" event={"ID":"4699f1a8-9e55-49b6-a67f-f84bd256fa0f","Type":"ContainerDied","Data":"2e37716ed0c3c206e0f8735547718eb6c86af1e25889c3a19b165bd8c72c0516"} Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.031911 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6p9z2" event={"ID":"4699f1a8-9e55-49b6-a67f-f84bd256fa0f","Type":"ContainerDied","Data":"1a02ee229b0afc9302d9d4d284435808a2a27936f5d96f500e91a41e42a77736"} Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.044356 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kqd9p"] Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.047562 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kqd9p"] Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.078001 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2vnz"] Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.086524 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2vnz"] Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.102509 4754 scope.go:117] "RemoveContainer" containerID="deaeb149832699c1d29c64212f1a5dceeef7d5eaf6efa0f14e43970ba5afe27b" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.113083 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wljc4"] Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.120376 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wljc4"] Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 
19:23:57.131823 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x9nlb"] Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.136073 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x9nlb"] Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.141986 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6p9z2"] Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.145958 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6p9z2"] Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.148410 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vxtt4"] Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.148929 4754 scope.go:117] "RemoveContainer" containerID="99f5323af030d766e4947277a42d9b7ecfc9cdcdd8079f7a2d670b7f1e6b66da" Feb 18 19:23:57 crc kubenswrapper[4754]: E0218 19:23:57.149729 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99f5323af030d766e4947277a42d9b7ecfc9cdcdd8079f7a2d670b7f1e6b66da\": container with ID starting with 99f5323af030d766e4947277a42d9b7ecfc9cdcdd8079f7a2d670b7f1e6b66da not found: ID does not exist" containerID="99f5323af030d766e4947277a42d9b7ecfc9cdcdd8079f7a2d670b7f1e6b66da" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.149796 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f5323af030d766e4947277a42d9b7ecfc9cdcdd8079f7a2d670b7f1e6b66da"} err="failed to get container status \"99f5323af030d766e4947277a42d9b7ecfc9cdcdd8079f7a2d670b7f1e6b66da\": rpc error: code = NotFound desc = could not find container \"99f5323af030d766e4947277a42d9b7ecfc9cdcdd8079f7a2d670b7f1e6b66da\": container with ID starting with 
99f5323af030d766e4947277a42d9b7ecfc9cdcdd8079f7a2d670b7f1e6b66da not found: ID does not exist" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.149842 4754 scope.go:117] "RemoveContainer" containerID="944329832a2f0c86fb6b861ae90dfb007da824dbf41975e0295e3834dbc74d33" Feb 18 19:23:57 crc kubenswrapper[4754]: E0218 19:23:57.150621 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"944329832a2f0c86fb6b861ae90dfb007da824dbf41975e0295e3834dbc74d33\": container with ID starting with 944329832a2f0c86fb6b861ae90dfb007da824dbf41975e0295e3834dbc74d33 not found: ID does not exist" containerID="944329832a2f0c86fb6b861ae90dfb007da824dbf41975e0295e3834dbc74d33" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.150672 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"944329832a2f0c86fb6b861ae90dfb007da824dbf41975e0295e3834dbc74d33"} err="failed to get container status \"944329832a2f0c86fb6b861ae90dfb007da824dbf41975e0295e3834dbc74d33\": rpc error: code = NotFound desc = could not find container \"944329832a2f0c86fb6b861ae90dfb007da824dbf41975e0295e3834dbc74d33\": container with ID starting with 944329832a2f0c86fb6b861ae90dfb007da824dbf41975e0295e3834dbc74d33 not found: ID does not exist" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.150713 4754 scope.go:117] "RemoveContainer" containerID="deaeb149832699c1d29c64212f1a5dceeef7d5eaf6efa0f14e43970ba5afe27b" Feb 18 19:23:57 crc kubenswrapper[4754]: E0218 19:23:57.151131 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deaeb149832699c1d29c64212f1a5dceeef7d5eaf6efa0f14e43970ba5afe27b\": container with ID starting with deaeb149832699c1d29c64212f1a5dceeef7d5eaf6efa0f14e43970ba5afe27b not found: ID does not exist" containerID="deaeb149832699c1d29c64212f1a5dceeef7d5eaf6efa0f14e43970ba5afe27b" Feb 18 19:23:57 crc 
kubenswrapper[4754]: I0218 19:23:57.151174 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deaeb149832699c1d29c64212f1a5dceeef7d5eaf6efa0f14e43970ba5afe27b"} err="failed to get container status \"deaeb149832699c1d29c64212f1a5dceeef7d5eaf6efa0f14e43970ba5afe27b\": rpc error: code = NotFound desc = could not find container \"deaeb149832699c1d29c64212f1a5dceeef7d5eaf6efa0f14e43970ba5afe27b\": container with ID starting with deaeb149832699c1d29c64212f1a5dceeef7d5eaf6efa0f14e43970ba5afe27b not found: ID does not exist" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.151200 4754 scope.go:117] "RemoveContainer" containerID="2a39424223d27318f7b1d0ff56381c0969e63c845e5af658cf26d835cf2a7cd9" Feb 18 19:23:57 crc kubenswrapper[4754]: W0218 19:23:57.156076 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72accc0b_fc2a_42ee_9d17_cad3b19abed3.slice/crio-cba7a72709b6ddf0226e118ead6d1f7abc9b077d1c0f9e996927c5dfc210f249 WatchSource:0}: Error finding container cba7a72709b6ddf0226e118ead6d1f7abc9b077d1c0f9e996927c5dfc210f249: Status 404 returned error can't find the container with id cba7a72709b6ddf0226e118ead6d1f7abc9b077d1c0f9e996927c5dfc210f249 Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.189573 4754 scope.go:117] "RemoveContainer" containerID="5c6acf070d08397a42d85ebcc90b3428ea78d0257bfeb1f60ec469f71ee7ddfc" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.223912 4754 scope.go:117] "RemoveContainer" containerID="d98861ffc759ab6f545743e8e6461b58c6098364d3f1ac020b703614973a6056" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.258912 4754 scope.go:117] "RemoveContainer" containerID="2a39424223d27318f7b1d0ff56381c0969e63c845e5af658cf26d835cf2a7cd9" Feb 18 19:23:57 crc kubenswrapper[4754]: E0218 19:23:57.260988 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"2a39424223d27318f7b1d0ff56381c0969e63c845e5af658cf26d835cf2a7cd9\": container with ID starting with 2a39424223d27318f7b1d0ff56381c0969e63c845e5af658cf26d835cf2a7cd9 not found: ID does not exist" containerID="2a39424223d27318f7b1d0ff56381c0969e63c845e5af658cf26d835cf2a7cd9" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.261043 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a39424223d27318f7b1d0ff56381c0969e63c845e5af658cf26d835cf2a7cd9"} err="failed to get container status \"2a39424223d27318f7b1d0ff56381c0969e63c845e5af658cf26d835cf2a7cd9\": rpc error: code = NotFound desc = could not find container \"2a39424223d27318f7b1d0ff56381c0969e63c845e5af658cf26d835cf2a7cd9\": container with ID starting with 2a39424223d27318f7b1d0ff56381c0969e63c845e5af658cf26d835cf2a7cd9 not found: ID does not exist" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.261078 4754 scope.go:117] "RemoveContainer" containerID="5c6acf070d08397a42d85ebcc90b3428ea78d0257bfeb1f60ec469f71ee7ddfc" Feb 18 19:23:57 crc kubenswrapper[4754]: E0218 19:23:57.261401 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c6acf070d08397a42d85ebcc90b3428ea78d0257bfeb1f60ec469f71ee7ddfc\": container with ID starting with 5c6acf070d08397a42d85ebcc90b3428ea78d0257bfeb1f60ec469f71ee7ddfc not found: ID does not exist" containerID="5c6acf070d08397a42d85ebcc90b3428ea78d0257bfeb1f60ec469f71ee7ddfc" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.261432 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c6acf070d08397a42d85ebcc90b3428ea78d0257bfeb1f60ec469f71ee7ddfc"} err="failed to get container status \"5c6acf070d08397a42d85ebcc90b3428ea78d0257bfeb1f60ec469f71ee7ddfc\": rpc error: code = NotFound desc = could not find container \"5c6acf070d08397a42d85ebcc90b3428ea78d0257bfeb1f60ec469f71ee7ddfc\": 
container with ID starting with 5c6acf070d08397a42d85ebcc90b3428ea78d0257bfeb1f60ec469f71ee7ddfc not found: ID does not exist" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.261454 4754 scope.go:117] "RemoveContainer" containerID="d98861ffc759ab6f545743e8e6461b58c6098364d3f1ac020b703614973a6056" Feb 18 19:23:57 crc kubenswrapper[4754]: E0218 19:23:57.261653 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d98861ffc759ab6f545743e8e6461b58c6098364d3f1ac020b703614973a6056\": container with ID starting with d98861ffc759ab6f545743e8e6461b58c6098364d3f1ac020b703614973a6056 not found: ID does not exist" containerID="d98861ffc759ab6f545743e8e6461b58c6098364d3f1ac020b703614973a6056" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.261678 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d98861ffc759ab6f545743e8e6461b58c6098364d3f1ac020b703614973a6056"} err="failed to get container status \"d98861ffc759ab6f545743e8e6461b58c6098364d3f1ac020b703614973a6056\": rpc error: code = NotFound desc = could not find container \"d98861ffc759ab6f545743e8e6461b58c6098364d3f1ac020b703614973a6056\": container with ID starting with d98861ffc759ab6f545743e8e6461b58c6098364d3f1ac020b703614973a6056 not found: ID does not exist" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.261694 4754 scope.go:117] "RemoveContainer" containerID="3dfd8b4ff502d44cc1f5efac4120461e72872fa164a33b2fb93dc55bd3f38271" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.279061 4754 scope.go:117] "RemoveContainer" containerID="c1417b0216f0ce836aed2bce7f71ab5e544ff02db1216c1292c6ac2b201bbede" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.300470 4754 scope.go:117] "RemoveContainer" containerID="3dfd8b4ff502d44cc1f5efac4120461e72872fa164a33b2fb93dc55bd3f38271" Feb 18 19:23:57 crc kubenswrapper[4754]: E0218 19:23:57.301693 4754 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dfd8b4ff502d44cc1f5efac4120461e72872fa164a33b2fb93dc55bd3f38271\": container with ID starting with 3dfd8b4ff502d44cc1f5efac4120461e72872fa164a33b2fb93dc55bd3f38271 not found: ID does not exist" containerID="3dfd8b4ff502d44cc1f5efac4120461e72872fa164a33b2fb93dc55bd3f38271" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.301763 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dfd8b4ff502d44cc1f5efac4120461e72872fa164a33b2fb93dc55bd3f38271"} err="failed to get container status \"3dfd8b4ff502d44cc1f5efac4120461e72872fa164a33b2fb93dc55bd3f38271\": rpc error: code = NotFound desc = could not find container \"3dfd8b4ff502d44cc1f5efac4120461e72872fa164a33b2fb93dc55bd3f38271\": container with ID starting with 3dfd8b4ff502d44cc1f5efac4120461e72872fa164a33b2fb93dc55bd3f38271 not found: ID does not exist" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.301837 4754 scope.go:117] "RemoveContainer" containerID="c1417b0216f0ce836aed2bce7f71ab5e544ff02db1216c1292c6ac2b201bbede" Feb 18 19:23:57 crc kubenswrapper[4754]: E0218 19:23:57.302457 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1417b0216f0ce836aed2bce7f71ab5e544ff02db1216c1292c6ac2b201bbede\": container with ID starting with c1417b0216f0ce836aed2bce7f71ab5e544ff02db1216c1292c6ac2b201bbede not found: ID does not exist" containerID="c1417b0216f0ce836aed2bce7f71ab5e544ff02db1216c1292c6ac2b201bbede" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.302484 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1417b0216f0ce836aed2bce7f71ab5e544ff02db1216c1292c6ac2b201bbede"} err="failed to get container status \"c1417b0216f0ce836aed2bce7f71ab5e544ff02db1216c1292c6ac2b201bbede\": rpc error: code = NotFound desc = could not find 
container \"c1417b0216f0ce836aed2bce7f71ab5e544ff02db1216c1292c6ac2b201bbede\": container with ID starting with c1417b0216f0ce836aed2bce7f71ab5e544ff02db1216c1292c6ac2b201bbede not found: ID does not exist" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.302818 4754 scope.go:117] "RemoveContainer" containerID="f76b7291b614c9eabdb7729d041274156b4e58a60d8ffd716cea2b82a6473923" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.325417 4754 scope.go:117] "RemoveContainer" containerID="38da114372e1bf30546652957e40be2a6e7a2a77dfa7a98326ae20a897a2d79a" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.340067 4754 scope.go:117] "RemoveContainer" containerID="246196a9f70ec9b0cc356cfcc21dd10f5e0c69199ec619316384b42762bb3984" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.357272 4754 scope.go:117] "RemoveContainer" containerID="f76b7291b614c9eabdb7729d041274156b4e58a60d8ffd716cea2b82a6473923" Feb 18 19:23:57 crc kubenswrapper[4754]: E0218 19:23:57.357787 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76b7291b614c9eabdb7729d041274156b4e58a60d8ffd716cea2b82a6473923\": container with ID starting with f76b7291b614c9eabdb7729d041274156b4e58a60d8ffd716cea2b82a6473923 not found: ID does not exist" containerID="f76b7291b614c9eabdb7729d041274156b4e58a60d8ffd716cea2b82a6473923" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.357838 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f76b7291b614c9eabdb7729d041274156b4e58a60d8ffd716cea2b82a6473923"} err="failed to get container status \"f76b7291b614c9eabdb7729d041274156b4e58a60d8ffd716cea2b82a6473923\": rpc error: code = NotFound desc = could not find container \"f76b7291b614c9eabdb7729d041274156b4e58a60d8ffd716cea2b82a6473923\": container with ID starting with f76b7291b614c9eabdb7729d041274156b4e58a60d8ffd716cea2b82a6473923 not found: ID does not exist" Feb 18 19:23:57 
crc kubenswrapper[4754]: I0218 19:23:57.357875 4754 scope.go:117] "RemoveContainer" containerID="38da114372e1bf30546652957e40be2a6e7a2a77dfa7a98326ae20a897a2d79a" Feb 18 19:23:57 crc kubenswrapper[4754]: E0218 19:23:57.359842 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38da114372e1bf30546652957e40be2a6e7a2a77dfa7a98326ae20a897a2d79a\": container with ID starting with 38da114372e1bf30546652957e40be2a6e7a2a77dfa7a98326ae20a897a2d79a not found: ID does not exist" containerID="38da114372e1bf30546652957e40be2a6e7a2a77dfa7a98326ae20a897a2d79a" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.359867 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38da114372e1bf30546652957e40be2a6e7a2a77dfa7a98326ae20a897a2d79a"} err="failed to get container status \"38da114372e1bf30546652957e40be2a6e7a2a77dfa7a98326ae20a897a2d79a\": rpc error: code = NotFound desc = could not find container \"38da114372e1bf30546652957e40be2a6e7a2a77dfa7a98326ae20a897a2d79a\": container with ID starting with 38da114372e1bf30546652957e40be2a6e7a2a77dfa7a98326ae20a897a2d79a not found: ID does not exist" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.359889 4754 scope.go:117] "RemoveContainer" containerID="246196a9f70ec9b0cc356cfcc21dd10f5e0c69199ec619316384b42762bb3984" Feb 18 19:23:57 crc kubenswrapper[4754]: E0218 19:23:57.360114 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"246196a9f70ec9b0cc356cfcc21dd10f5e0c69199ec619316384b42762bb3984\": container with ID starting with 246196a9f70ec9b0cc356cfcc21dd10f5e0c69199ec619316384b42762bb3984 not found: ID does not exist" containerID="246196a9f70ec9b0cc356cfcc21dd10f5e0c69199ec619316384b42762bb3984" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.360131 4754 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"246196a9f70ec9b0cc356cfcc21dd10f5e0c69199ec619316384b42762bb3984"} err="failed to get container status \"246196a9f70ec9b0cc356cfcc21dd10f5e0c69199ec619316384b42762bb3984\": rpc error: code = NotFound desc = could not find container \"246196a9f70ec9b0cc356cfcc21dd10f5e0c69199ec619316384b42762bb3984\": container with ID starting with 246196a9f70ec9b0cc356cfcc21dd10f5e0c69199ec619316384b42762bb3984 not found: ID does not exist" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.360153 4754 scope.go:117] "RemoveContainer" containerID="2e37716ed0c3c206e0f8735547718eb6c86af1e25889c3a19b165bd8c72c0516" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.375668 4754 scope.go:117] "RemoveContainer" containerID="58ed7acd122dd7586e4c21fc5393be6cc7a7e84e56c9c3b803825dc6bf9b529b" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.392439 4754 scope.go:117] "RemoveContainer" containerID="c68e6bfd4642152db923c9002c4b6f7953d98001febce556de4d68d3fd123d99" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.423415 4754 scope.go:117] "RemoveContainer" containerID="2e37716ed0c3c206e0f8735547718eb6c86af1e25889c3a19b165bd8c72c0516" Feb 18 19:23:57 crc kubenswrapper[4754]: E0218 19:23:57.424502 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e37716ed0c3c206e0f8735547718eb6c86af1e25889c3a19b165bd8c72c0516\": container with ID starting with 2e37716ed0c3c206e0f8735547718eb6c86af1e25889c3a19b165bd8c72c0516 not found: ID does not exist" containerID="2e37716ed0c3c206e0f8735547718eb6c86af1e25889c3a19b165bd8c72c0516" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.424567 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e37716ed0c3c206e0f8735547718eb6c86af1e25889c3a19b165bd8c72c0516"} err="failed to get container status \"2e37716ed0c3c206e0f8735547718eb6c86af1e25889c3a19b165bd8c72c0516\": rpc error: code = 
NotFound desc = could not find container \"2e37716ed0c3c206e0f8735547718eb6c86af1e25889c3a19b165bd8c72c0516\": container with ID starting with 2e37716ed0c3c206e0f8735547718eb6c86af1e25889c3a19b165bd8c72c0516 not found: ID does not exist" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.424605 4754 scope.go:117] "RemoveContainer" containerID="58ed7acd122dd7586e4c21fc5393be6cc7a7e84e56c9c3b803825dc6bf9b529b" Feb 18 19:23:57 crc kubenswrapper[4754]: E0218 19:23:57.424957 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58ed7acd122dd7586e4c21fc5393be6cc7a7e84e56c9c3b803825dc6bf9b529b\": container with ID starting with 58ed7acd122dd7586e4c21fc5393be6cc7a7e84e56c9c3b803825dc6bf9b529b not found: ID does not exist" containerID="58ed7acd122dd7586e4c21fc5393be6cc7a7e84e56c9c3b803825dc6bf9b529b" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.424980 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ed7acd122dd7586e4c21fc5393be6cc7a7e84e56c9c3b803825dc6bf9b529b"} err="failed to get container status \"58ed7acd122dd7586e4c21fc5393be6cc7a7e84e56c9c3b803825dc6bf9b529b\": rpc error: code = NotFound desc = could not find container \"58ed7acd122dd7586e4c21fc5393be6cc7a7e84e56c9c3b803825dc6bf9b529b\": container with ID starting with 58ed7acd122dd7586e4c21fc5393be6cc7a7e84e56c9c3b803825dc6bf9b529b not found: ID does not exist" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.424994 4754 scope.go:117] "RemoveContainer" containerID="c68e6bfd4642152db923c9002c4b6f7953d98001febce556de4d68d3fd123d99" Feb 18 19:23:57 crc kubenswrapper[4754]: E0218 19:23:57.425272 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c68e6bfd4642152db923c9002c4b6f7953d98001febce556de4d68d3fd123d99\": container with ID starting with 
c68e6bfd4642152db923c9002c4b6f7953d98001febce556de4d68d3fd123d99 not found: ID does not exist" containerID="c68e6bfd4642152db923c9002c4b6f7953d98001febce556de4d68d3fd123d99" Feb 18 19:23:57 crc kubenswrapper[4754]: I0218 19:23:57.425292 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c68e6bfd4642152db923c9002c4b6f7953d98001febce556de4d68d3fd123d99"} err="failed to get container status \"c68e6bfd4642152db923c9002c4b6f7953d98001febce556de4d68d3fd123d99\": rpc error: code = NotFound desc = could not find container \"c68e6bfd4642152db923c9002c4b6f7953d98001febce556de4d68d3fd123d99\": container with ID starting with c68e6bfd4642152db923c9002c4b6f7953d98001febce556de4d68d3fd123d99 not found: ID does not exist" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.043058 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vxtt4" event={"ID":"72accc0b-fc2a-42ee-9d17-cad3b19abed3","Type":"ContainerStarted","Data":"4b122f62a256295731ede6e32429048ae5a1d0b5f0e7e8975664542d47fb3ab4"} Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.043110 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vxtt4" event={"ID":"72accc0b-fc2a-42ee-9d17-cad3b19abed3","Type":"ContainerStarted","Data":"cba7a72709b6ddf0226e118ead6d1f7abc9b077d1c0f9e996927c5dfc210f249"} Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.043556 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vxtt4" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.048821 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vxtt4" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.066133 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/marketplace-operator-79b997595-vxtt4" podStartSLOduration=3.066107506 podStartE2EDuration="3.066107506s" podCreationTimestamp="2026-02-18 19:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:23:58.062085326 +0000 UTC m=+340.512498122" watchObservedRunningTime="2026-02-18 19:23:58.066107506 +0000 UTC m=+340.516520302" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.147293 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ndqg9"] Feb 18 19:23:58 crc kubenswrapper[4754]: E0218 19:23:58.147766 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91e02b9-77f2-4adc-8255-ef6dca75c2cf" containerName="extract-utilities" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.147783 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91e02b9-77f2-4adc-8255-ef6dca75c2cf" containerName="extract-utilities" Feb 18 19:23:58 crc kubenswrapper[4754]: E0218 19:23:58.147797 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cabec9c1-d434-4382-87e9-c488658c02fe" containerName="registry-server" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.147805 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="cabec9c1-d434-4382-87e9-c488658c02fe" containerName="registry-server" Feb 18 19:23:58 crc kubenswrapper[4754]: E0218 19:23:58.147812 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b42468-3dc4-425d-ae7c-de59263bbf39" containerName="registry-server" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.147819 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b42468-3dc4-425d-ae7c-de59263bbf39" containerName="registry-server" Feb 18 19:23:58 crc kubenswrapper[4754]: E0218 19:23:58.147834 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91e02b9-77f2-4adc-8255-ef6dca75c2cf" 
containerName="registry-server" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.147841 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91e02b9-77f2-4adc-8255-ef6dca75c2cf" containerName="registry-server" Feb 18 19:23:58 crc kubenswrapper[4754]: E0218 19:23:58.147851 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4699f1a8-9e55-49b6-a67f-f84bd256fa0f" containerName="extract-utilities" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.147858 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4699f1a8-9e55-49b6-a67f-f84bd256fa0f" containerName="extract-utilities" Feb 18 19:23:58 crc kubenswrapper[4754]: E0218 19:23:58.147874 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b42468-3dc4-425d-ae7c-de59263bbf39" containerName="extract-utilities" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.147883 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b42468-3dc4-425d-ae7c-de59263bbf39" containerName="extract-utilities" Feb 18 19:23:58 crc kubenswrapper[4754]: E0218 19:23:58.147891 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cabec9c1-d434-4382-87e9-c488658c02fe" containerName="extract-utilities" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.147899 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="cabec9c1-d434-4382-87e9-c488658c02fe" containerName="extract-utilities" Feb 18 19:23:58 crc kubenswrapper[4754]: E0218 19:23:58.147909 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cabec9c1-d434-4382-87e9-c488658c02fe" containerName="extract-content" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.147916 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="cabec9c1-d434-4382-87e9-c488658c02fe" containerName="extract-content" Feb 18 19:23:58 crc kubenswrapper[4754]: E0218 19:23:58.147924 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae" 
containerName="marketplace-operator" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.147932 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae" containerName="marketplace-operator" Feb 18 19:23:58 crc kubenswrapper[4754]: E0218 19:23:58.147940 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4699f1a8-9e55-49b6-a67f-f84bd256fa0f" containerName="extract-content" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.147947 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4699f1a8-9e55-49b6-a67f-f84bd256fa0f" containerName="extract-content" Feb 18 19:23:58 crc kubenswrapper[4754]: E0218 19:23:58.147955 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b42468-3dc4-425d-ae7c-de59263bbf39" containerName="extract-content" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.147962 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b42468-3dc4-425d-ae7c-de59263bbf39" containerName="extract-content" Feb 18 19:23:58 crc kubenswrapper[4754]: E0218 19:23:58.147974 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4699f1a8-9e55-49b6-a67f-f84bd256fa0f" containerName="registry-server" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.147985 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4699f1a8-9e55-49b6-a67f-f84bd256fa0f" containerName="registry-server" Feb 18 19:23:58 crc kubenswrapper[4754]: E0218 19:23:58.148002 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91e02b9-77f2-4adc-8255-ef6dca75c2cf" containerName="extract-content" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.148009 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91e02b9-77f2-4adc-8255-ef6dca75c2cf" containerName="extract-content" Feb 18 19:23:58 crc kubenswrapper[4754]: E0218 19:23:58.148020 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae" 
containerName="marketplace-operator" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.148027 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae" containerName="marketplace-operator" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.148378 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a91e02b9-77f2-4adc-8255-ef6dca75c2cf" containerName="registry-server" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.148394 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="cabec9c1-d434-4382-87e9-c488658c02fe" containerName="registry-server" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.148406 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae" containerName="marketplace-operator" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.148418 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="51b42468-3dc4-425d-ae7c-de59263bbf39" containerName="registry-server" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.148426 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae" containerName="marketplace-operator" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.148437 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4699f1a8-9e55-49b6-a67f-f84bd256fa0f" containerName="registry-server" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.149776 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ndqg9" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.152279 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.160442 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ndqg9"] Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.216179 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4699f1a8-9e55-49b6-a67f-f84bd256fa0f" path="/var/lib/kubelet/pods/4699f1a8-9e55-49b6-a67f-f84bd256fa0f/volumes" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.216788 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51b42468-3dc4-425d-ae7c-de59263bbf39" path="/var/lib/kubelet/pods/51b42468-3dc4-425d-ae7c-de59263bbf39/volumes" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.217427 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae" path="/var/lib/kubelet/pods/8ebe1fc4-b055-4fe3-b40e-d7286a80a4ae/volumes" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.218481 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a91e02b9-77f2-4adc-8255-ef6dca75c2cf" path="/var/lib/kubelet/pods/a91e02b9-77f2-4adc-8255-ef6dca75c2cf/volumes" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.219035 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cabec9c1-d434-4382-87e9-c488658c02fe" path="/var/lib/kubelet/pods/cabec9c1-d434-4382-87e9-c488658c02fe/volumes" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.238955 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa8723e6-5c85-48bc-b6e2-428443f9db3d-utilities\") pod \"redhat-marketplace-ndqg9\" (UID: 
\"fa8723e6-5c85-48bc-b6e2-428443f9db3d\") " pod="openshift-marketplace/redhat-marketplace-ndqg9" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.238984 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa8723e6-5c85-48bc-b6e2-428443f9db3d-catalog-content\") pod \"redhat-marketplace-ndqg9\" (UID: \"fa8723e6-5c85-48bc-b6e2-428443f9db3d\") " pod="openshift-marketplace/redhat-marketplace-ndqg9" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.239014 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v42kf\" (UniqueName: \"kubernetes.io/projected/fa8723e6-5c85-48bc-b6e2-428443f9db3d-kube-api-access-v42kf\") pod \"redhat-marketplace-ndqg9\" (UID: \"fa8723e6-5c85-48bc-b6e2-428443f9db3d\") " pod="openshift-marketplace/redhat-marketplace-ndqg9" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.340724 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa8723e6-5c85-48bc-b6e2-428443f9db3d-utilities\") pod \"redhat-marketplace-ndqg9\" (UID: \"fa8723e6-5c85-48bc-b6e2-428443f9db3d\") " pod="openshift-marketplace/redhat-marketplace-ndqg9" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.340789 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa8723e6-5c85-48bc-b6e2-428443f9db3d-catalog-content\") pod \"redhat-marketplace-ndqg9\" (UID: \"fa8723e6-5c85-48bc-b6e2-428443f9db3d\") " pod="openshift-marketplace/redhat-marketplace-ndqg9" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.340848 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v42kf\" (UniqueName: \"kubernetes.io/projected/fa8723e6-5c85-48bc-b6e2-428443f9db3d-kube-api-access-v42kf\") pod 
\"redhat-marketplace-ndqg9\" (UID: \"fa8723e6-5c85-48bc-b6e2-428443f9db3d\") " pod="openshift-marketplace/redhat-marketplace-ndqg9" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.342056 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa8723e6-5c85-48bc-b6e2-428443f9db3d-utilities\") pod \"redhat-marketplace-ndqg9\" (UID: \"fa8723e6-5c85-48bc-b6e2-428443f9db3d\") " pod="openshift-marketplace/redhat-marketplace-ndqg9" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.342318 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa8723e6-5c85-48bc-b6e2-428443f9db3d-catalog-content\") pod \"redhat-marketplace-ndqg9\" (UID: \"fa8723e6-5c85-48bc-b6e2-428443f9db3d\") " pod="openshift-marketplace/redhat-marketplace-ndqg9" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.346011 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x4knz"] Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.347279 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x4knz" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.349547 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.362372 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x4knz"] Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.369513 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v42kf\" (UniqueName: \"kubernetes.io/projected/fa8723e6-5c85-48bc-b6e2-428443f9db3d-kube-api-access-v42kf\") pod \"redhat-marketplace-ndqg9\" (UID: \"fa8723e6-5c85-48bc-b6e2-428443f9db3d\") " pod="openshift-marketplace/redhat-marketplace-ndqg9" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.441812 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dee6445-27da-41a5-94cc-1ef8635da6fe-utilities\") pod \"redhat-operators-x4knz\" (UID: \"3dee6445-27da-41a5-94cc-1ef8635da6fe\") " pod="openshift-marketplace/redhat-operators-x4knz" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.441864 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dee6445-27da-41a5-94cc-1ef8635da6fe-catalog-content\") pod \"redhat-operators-x4knz\" (UID: \"3dee6445-27da-41a5-94cc-1ef8635da6fe\") " pod="openshift-marketplace/redhat-operators-x4knz" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.441912 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2s2s\" (UniqueName: \"kubernetes.io/projected/3dee6445-27da-41a5-94cc-1ef8635da6fe-kube-api-access-k2s2s\") pod \"redhat-operators-x4knz\" (UID: 
\"3dee6445-27da-41a5-94cc-1ef8635da6fe\") " pod="openshift-marketplace/redhat-operators-x4knz" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.468980 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ndqg9" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.543756 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dee6445-27da-41a5-94cc-1ef8635da6fe-utilities\") pod \"redhat-operators-x4knz\" (UID: \"3dee6445-27da-41a5-94cc-1ef8635da6fe\") " pod="openshift-marketplace/redhat-operators-x4knz" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.544352 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dee6445-27da-41a5-94cc-1ef8635da6fe-catalog-content\") pod \"redhat-operators-x4knz\" (UID: \"3dee6445-27da-41a5-94cc-1ef8635da6fe\") " pod="openshift-marketplace/redhat-operators-x4knz" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.544443 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2s2s\" (UniqueName: \"kubernetes.io/projected/3dee6445-27da-41a5-94cc-1ef8635da6fe-kube-api-access-k2s2s\") pod \"redhat-operators-x4knz\" (UID: \"3dee6445-27da-41a5-94cc-1ef8635da6fe\") " pod="openshift-marketplace/redhat-operators-x4knz" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.544583 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dee6445-27da-41a5-94cc-1ef8635da6fe-utilities\") pod \"redhat-operators-x4knz\" (UID: \"3dee6445-27da-41a5-94cc-1ef8635da6fe\") " pod="openshift-marketplace/redhat-operators-x4knz" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.544912 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3dee6445-27da-41a5-94cc-1ef8635da6fe-catalog-content\") pod \"redhat-operators-x4knz\" (UID: \"3dee6445-27da-41a5-94cc-1ef8635da6fe\") " pod="openshift-marketplace/redhat-operators-x4knz" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.576590 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2s2s\" (UniqueName: \"kubernetes.io/projected/3dee6445-27da-41a5-94cc-1ef8635da6fe-kube-api-access-k2s2s\") pod \"redhat-operators-x4knz\" (UID: \"3dee6445-27da-41a5-94cc-1ef8635da6fe\") " pod="openshift-marketplace/redhat-operators-x4knz" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.715966 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4knz" Feb 18 19:23:58 crc kubenswrapper[4754]: I0218 19:23:58.898386 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ndqg9"] Feb 18 19:23:58 crc kubenswrapper[4754]: W0218 19:23:58.912356 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa8723e6_5c85_48bc_b6e2_428443f9db3d.slice/crio-f1736800d0407e787dee875f2b6e5bc78e8a41367d7d8593ab9f38c5c2424d8d WatchSource:0}: Error finding container f1736800d0407e787dee875f2b6e5bc78e8a41367d7d8593ab9f38c5c2424d8d: Status 404 returned error can't find the container with id f1736800d0407e787dee875f2b6e5bc78e8a41367d7d8593ab9f38c5c2424d8d Feb 18 19:23:59 crc kubenswrapper[4754]: I0218 19:23:59.053161 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndqg9" event={"ID":"fa8723e6-5c85-48bc-b6e2-428443f9db3d","Type":"ContainerStarted","Data":"f1736800d0407e787dee875f2b6e5bc78e8a41367d7d8593ab9f38c5c2424d8d"} Feb 18 19:23:59 crc kubenswrapper[4754]: I0218 19:23:59.155740 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-x4knz"] Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.062134 4754 generic.go:334] "Generic (PLEG): container finished" podID="3dee6445-27da-41a5-94cc-1ef8635da6fe" containerID="7eca49a680a6aec40be059f4a754de20dfaaf2c778ca9659d9d743f701fd6229" exitCode=0 Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.062247 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4knz" event={"ID":"3dee6445-27da-41a5-94cc-1ef8635da6fe","Type":"ContainerDied","Data":"7eca49a680a6aec40be059f4a754de20dfaaf2c778ca9659d9d743f701fd6229"} Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.062286 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4knz" event={"ID":"3dee6445-27da-41a5-94cc-1ef8635da6fe","Type":"ContainerStarted","Data":"4660cf10e27ae05bfea759d3ccf8908678c8b3a6a6ab410a6683194a7ebafd12"} Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.064918 4754 generic.go:334] "Generic (PLEG): container finished" podID="fa8723e6-5c85-48bc-b6e2-428443f9db3d" containerID="38676d0f97043388b9c5a8a8f22f1cffaed9c84eb737cebec67408536f480b86" exitCode=0 Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.065257 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndqg9" event={"ID":"fa8723e6-5c85-48bc-b6e2-428443f9db3d","Type":"ContainerDied","Data":"38676d0f97043388b9c5a8a8f22f1cffaed9c84eb737cebec67408536f480b86"} Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.549544 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m4v47"] Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.550916 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m4v47" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.554273 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.558189 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m4v47"] Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.669449 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c990196-74b9-4816-ae91-b5d469a4b3cb-utilities\") pod \"certified-operators-m4v47\" (UID: \"1c990196-74b9-4816-ae91-b5d469a4b3cb\") " pod="openshift-marketplace/certified-operators-m4v47" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.669533 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c990196-74b9-4816-ae91-b5d469a4b3cb-catalog-content\") pod \"certified-operators-m4v47\" (UID: \"1c990196-74b9-4816-ae91-b5d469a4b3cb\") " pod="openshift-marketplace/certified-operators-m4v47" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.669579 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xssh4\" (UniqueName: \"kubernetes.io/projected/1c990196-74b9-4816-ae91-b5d469a4b3cb-kube-api-access-xssh4\") pod \"certified-operators-m4v47\" (UID: \"1c990196-74b9-4816-ae91-b5d469a4b3cb\") " pod="openshift-marketplace/certified-operators-m4v47" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.749310 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-46vpz"] Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.750679 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-46vpz" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.753584 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.769379 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-46vpz"] Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.773265 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c990196-74b9-4816-ae91-b5d469a4b3cb-catalog-content\") pod \"certified-operators-m4v47\" (UID: \"1c990196-74b9-4816-ae91-b5d469a4b3cb\") " pod="openshift-marketplace/certified-operators-m4v47" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.773341 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xssh4\" (UniqueName: \"kubernetes.io/projected/1c990196-74b9-4816-ae91-b5d469a4b3cb-kube-api-access-xssh4\") pod \"certified-operators-m4v47\" (UID: \"1c990196-74b9-4816-ae91-b5d469a4b3cb\") " pod="openshift-marketplace/certified-operators-m4v47" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.773388 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c990196-74b9-4816-ae91-b5d469a4b3cb-utilities\") pod \"certified-operators-m4v47\" (UID: \"1c990196-74b9-4816-ae91-b5d469a4b3cb\") " pod="openshift-marketplace/certified-operators-m4v47" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.773907 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c990196-74b9-4816-ae91-b5d469a4b3cb-utilities\") pod \"certified-operators-m4v47\" (UID: \"1c990196-74b9-4816-ae91-b5d469a4b3cb\") " 
pod="openshift-marketplace/certified-operators-m4v47" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.786647 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c990196-74b9-4816-ae91-b5d469a4b3cb-catalog-content\") pod \"certified-operators-m4v47\" (UID: \"1c990196-74b9-4816-ae91-b5d469a4b3cb\") " pod="openshift-marketplace/certified-operators-m4v47" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.806214 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xssh4\" (UniqueName: \"kubernetes.io/projected/1c990196-74b9-4816-ae91-b5d469a4b3cb-kube-api-access-xssh4\") pod \"certified-operators-m4v47\" (UID: \"1c990196-74b9-4816-ae91-b5d469a4b3cb\") " pod="openshift-marketplace/certified-operators-m4v47" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.870623 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m4v47" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.875268 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23d89c6e-006c-4283-8446-09c5a4ce0d97-catalog-content\") pod \"community-operators-46vpz\" (UID: \"23d89c6e-006c-4283-8446-09c5a4ce0d97\") " pod="openshift-marketplace/community-operators-46vpz" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.875355 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23d89c6e-006c-4283-8446-09c5a4ce0d97-utilities\") pod \"community-operators-46vpz\" (UID: \"23d89c6e-006c-4283-8446-09c5a4ce0d97\") " pod="openshift-marketplace/community-operators-46vpz" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.875523 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpcr2\" (UniqueName: \"kubernetes.io/projected/23d89c6e-006c-4283-8446-09c5a4ce0d97-kube-api-access-dpcr2\") pod \"community-operators-46vpz\" (UID: \"23d89c6e-006c-4283-8446-09c5a4ce0d97\") " pod="openshift-marketplace/community-operators-46vpz" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.977300 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23d89c6e-006c-4283-8446-09c5a4ce0d97-catalog-content\") pod \"community-operators-46vpz\" (UID: \"23d89c6e-006c-4283-8446-09c5a4ce0d97\") " pod="openshift-marketplace/community-operators-46vpz" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.977705 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23d89c6e-006c-4283-8446-09c5a4ce0d97-utilities\") pod \"community-operators-46vpz\" (UID: \"23d89c6e-006c-4283-8446-09c5a4ce0d97\") " pod="openshift-marketplace/community-operators-46vpz" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.977812 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpcr2\" (UniqueName: \"kubernetes.io/projected/23d89c6e-006c-4283-8446-09c5a4ce0d97-kube-api-access-dpcr2\") pod \"community-operators-46vpz\" (UID: \"23d89c6e-006c-4283-8446-09c5a4ce0d97\") " pod="openshift-marketplace/community-operators-46vpz" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.977942 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23d89c6e-006c-4283-8446-09c5a4ce0d97-catalog-content\") pod \"community-operators-46vpz\" (UID: \"23d89c6e-006c-4283-8446-09c5a4ce0d97\") " pod="openshift-marketplace/community-operators-46vpz" Feb 18 19:24:00 crc kubenswrapper[4754]: I0218 19:24:00.978173 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23d89c6e-006c-4283-8446-09c5a4ce0d97-utilities\") pod \"community-operators-46vpz\" (UID: \"23d89c6e-006c-4283-8446-09c5a4ce0d97\") " pod="openshift-marketplace/community-operators-46vpz" Feb 18 19:24:01 crc kubenswrapper[4754]: I0218 19:24:01.016449 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpcr2\" (UniqueName: \"kubernetes.io/projected/23d89c6e-006c-4283-8446-09c5a4ce0d97-kube-api-access-dpcr2\") pod \"community-operators-46vpz\" (UID: \"23d89c6e-006c-4283-8446-09c5a4ce0d97\") " pod="openshift-marketplace/community-operators-46vpz" Feb 18 19:24:01 crc kubenswrapper[4754]: I0218 19:24:01.067070 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-46vpz" Feb 18 19:24:01 crc kubenswrapper[4754]: I0218 19:24:01.332073 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m4v47"] Feb 18 19:24:01 crc kubenswrapper[4754]: W0218 19:24:01.413285 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c990196_74b9_4816_ae91_b5d469a4b3cb.slice/crio-4a49da0aa67819149ab6410c6abb29bdaa01bcfd9a6af5ade245d68ca2b0f58b WatchSource:0}: Error finding container 4a49da0aa67819149ab6410c6abb29bdaa01bcfd9a6af5ade245d68ca2b0f58b: Status 404 returned error can't find the container with id 4a49da0aa67819149ab6410c6abb29bdaa01bcfd9a6af5ade245d68ca2b0f58b Feb 18 19:24:01 crc kubenswrapper[4754]: I0218 19:24:01.509454 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-46vpz"] Feb 18 19:24:01 crc kubenswrapper[4754]: W0218 19:24:01.514088 4754 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23d89c6e_006c_4283_8446_09c5a4ce0d97.slice/crio-1dc384cb4baf0452c086efe1c30f78738fa474f1543c9a17321281443e39dd76 WatchSource:0}: Error finding container 1dc384cb4baf0452c086efe1c30f78738fa474f1543c9a17321281443e39dd76: Status 404 returned error can't find the container with id 1dc384cb4baf0452c086efe1c30f78738fa474f1543c9a17321281443e39dd76 Feb 18 19:24:02 crc kubenswrapper[4754]: I0218 19:24:02.089741 4754 generic.go:334] "Generic (PLEG): container finished" podID="3dee6445-27da-41a5-94cc-1ef8635da6fe" containerID="7e3124d69d5dc4e8d991913c2f9825df9136ca4f9201f915dc683d46b53d242d" exitCode=0 Feb 18 19:24:02 crc kubenswrapper[4754]: I0218 19:24:02.089829 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4knz" event={"ID":"3dee6445-27da-41a5-94cc-1ef8635da6fe","Type":"ContainerDied","Data":"7e3124d69d5dc4e8d991913c2f9825df9136ca4f9201f915dc683d46b53d242d"} Feb 18 19:24:02 crc kubenswrapper[4754]: I0218 19:24:02.093155 4754 generic.go:334] "Generic (PLEG): container finished" podID="fa8723e6-5c85-48bc-b6e2-428443f9db3d" containerID="66d2b921eeca2fb62db9ff1a6fdda2c34825472b9377d73ea6cafb39531d1432" exitCode=0 Feb 18 19:24:02 crc kubenswrapper[4754]: I0218 19:24:02.093277 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndqg9" event={"ID":"fa8723e6-5c85-48bc-b6e2-428443f9db3d","Type":"ContainerDied","Data":"66d2b921eeca2fb62db9ff1a6fdda2c34825472b9377d73ea6cafb39531d1432"} Feb 18 19:24:02 crc kubenswrapper[4754]: I0218 19:24:02.095303 4754 generic.go:334] "Generic (PLEG): container finished" podID="23d89c6e-006c-4283-8446-09c5a4ce0d97" containerID="77b88f3d4594d57f9b518ef254682c539614d1ff547da9944a8de66759063ad8" exitCode=0 Feb 18 19:24:02 crc kubenswrapper[4754]: I0218 19:24:02.095390 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-46vpz" 
event={"ID":"23d89c6e-006c-4283-8446-09c5a4ce0d97","Type":"ContainerDied","Data":"77b88f3d4594d57f9b518ef254682c539614d1ff547da9944a8de66759063ad8"} Feb 18 19:24:02 crc kubenswrapper[4754]: I0218 19:24:02.095475 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-46vpz" event={"ID":"23d89c6e-006c-4283-8446-09c5a4ce0d97","Type":"ContainerStarted","Data":"1dc384cb4baf0452c086efe1c30f78738fa474f1543c9a17321281443e39dd76"} Feb 18 19:24:02 crc kubenswrapper[4754]: I0218 19:24:02.098869 4754 generic.go:334] "Generic (PLEG): container finished" podID="1c990196-74b9-4816-ae91-b5d469a4b3cb" containerID="99110b6c094f68b1dba5f99cc4faa1e876fff9bf960eee6cdb7ed115ae0257cf" exitCode=0 Feb 18 19:24:02 crc kubenswrapper[4754]: I0218 19:24:02.098918 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m4v47" event={"ID":"1c990196-74b9-4816-ae91-b5d469a4b3cb","Type":"ContainerDied","Data":"99110b6c094f68b1dba5f99cc4faa1e876fff9bf960eee6cdb7ed115ae0257cf"} Feb 18 19:24:02 crc kubenswrapper[4754]: I0218 19:24:02.098949 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m4v47" event={"ID":"1c990196-74b9-4816-ae91-b5d469a4b3cb","Type":"ContainerStarted","Data":"4a49da0aa67819149ab6410c6abb29bdaa01bcfd9a6af5ade245d68ca2b0f58b"} Feb 18 19:24:03 crc kubenswrapper[4754]: I0218 19:24:03.109620 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4knz" event={"ID":"3dee6445-27da-41a5-94cc-1ef8635da6fe","Type":"ContainerStarted","Data":"955f871f92a26c526e728b8f70f068c0779dfc5339bb003ee7427485c6bf8fab"} Feb 18 19:24:03 crc kubenswrapper[4754]: I0218 19:24:03.115635 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndqg9" 
event={"ID":"fa8723e6-5c85-48bc-b6e2-428443f9db3d","Type":"ContainerStarted","Data":"993a0986c45e94f9dfa68024395f34891e0e5ea24bf231b9efea04133e6559c8"} Feb 18 19:24:03 crc kubenswrapper[4754]: I0218 19:24:03.132239 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x4knz" podStartSLOduration=2.700494497 podStartE2EDuration="5.132219713s" podCreationTimestamp="2026-02-18 19:23:58 +0000 UTC" firstStartedPulling="2026-02-18 19:24:00.064476849 +0000 UTC m=+342.514889655" lastFinishedPulling="2026-02-18 19:24:02.496202075 +0000 UTC m=+344.946614871" observedRunningTime="2026-02-18 19:24:03.129613715 +0000 UTC m=+345.580026501" watchObservedRunningTime="2026-02-18 19:24:03.132219713 +0000 UTC m=+345.582632509" Feb 18 19:24:03 crc kubenswrapper[4754]: I0218 19:24:03.158574 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ndqg9" podStartSLOduration=2.658821641 podStartE2EDuration="5.158553995s" podCreationTimestamp="2026-02-18 19:23:58 +0000 UTC" firstStartedPulling="2026-02-18 19:24:00.067331494 +0000 UTC m=+342.517744300" lastFinishedPulling="2026-02-18 19:24:02.567063858 +0000 UTC m=+345.017476654" observedRunningTime="2026-02-18 19:24:03.155466603 +0000 UTC m=+345.605879399" watchObservedRunningTime="2026-02-18 19:24:03.158553995 +0000 UTC m=+345.608966791" Feb 18 19:24:04 crc kubenswrapper[4754]: I0218 19:24:04.126517 4754 generic.go:334] "Generic (PLEG): container finished" podID="23d89c6e-006c-4283-8446-09c5a4ce0d97" containerID="a106273acdf18a4bf5be0095237b2432fe1f735f144cfaec15c40e5e726f2bb9" exitCode=0 Feb 18 19:24:04 crc kubenswrapper[4754]: I0218 19:24:04.126727 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-46vpz" event={"ID":"23d89c6e-006c-4283-8446-09c5a4ce0d97","Type":"ContainerDied","Data":"a106273acdf18a4bf5be0095237b2432fe1f735f144cfaec15c40e5e726f2bb9"} Feb 18 
19:24:04 crc kubenswrapper[4754]: I0218 19:24:04.129833 4754 generic.go:334] "Generic (PLEG): container finished" podID="1c990196-74b9-4816-ae91-b5d469a4b3cb" containerID="c78db7e810efb62ac3c20c6f88826bbd588a598e1d8d0a5e0f2ce7c5126157f1" exitCode=0 Feb 18 19:24:04 crc kubenswrapper[4754]: I0218 19:24:04.129891 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m4v47" event={"ID":"1c990196-74b9-4816-ae91-b5d469a4b3cb","Type":"ContainerDied","Data":"c78db7e810efb62ac3c20c6f88826bbd588a598e1d8d0a5e0f2ce7c5126157f1"} Feb 18 19:24:05 crc kubenswrapper[4754]: I0218 19:24:05.139390 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-46vpz" event={"ID":"23d89c6e-006c-4283-8446-09c5a4ce0d97","Type":"ContainerStarted","Data":"85aa5c4680d34f7d261d5451e06a48396cb173b91c178bc0d679de766834dcd0"} Feb 18 19:24:05 crc kubenswrapper[4754]: I0218 19:24:05.159001 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-46vpz" podStartSLOduration=2.73319539 podStartE2EDuration="5.158968279s" podCreationTimestamp="2026-02-18 19:24:00 +0000 UTC" firstStartedPulling="2026-02-18 19:24:02.097547693 +0000 UTC m=+344.547960489" lastFinishedPulling="2026-02-18 19:24:04.523320582 +0000 UTC m=+346.973733378" observedRunningTime="2026-02-18 19:24:05.158848975 +0000 UTC m=+347.609261781" watchObservedRunningTime="2026-02-18 19:24:05.158968279 +0000 UTC m=+347.609381085" Feb 18 19:24:05 crc kubenswrapper[4754]: I0218 19:24:05.159191 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m4v47" event={"ID":"1c990196-74b9-4816-ae91-b5d469a4b3cb","Type":"ContainerStarted","Data":"a3ff535341f0737c1163dd0cc889e0905c26ff267b3d39c6b887a30eccfbcae9"} Feb 18 19:24:08 crc kubenswrapper[4754]: I0218 19:24:08.096876 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:24:08 crc kubenswrapper[4754]: I0218 19:24:08.097797 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:24:08 crc kubenswrapper[4754]: I0218 19:24:08.469770 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ndqg9" Feb 18 19:24:08 crc kubenswrapper[4754]: I0218 19:24:08.469932 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ndqg9" Feb 18 19:24:08 crc kubenswrapper[4754]: I0218 19:24:08.520800 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ndqg9" Feb 18 19:24:08 crc kubenswrapper[4754]: I0218 19:24:08.542050 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m4v47" podStartSLOduration=6.14382132 podStartE2EDuration="8.542030732s" podCreationTimestamp="2026-02-18 19:24:00 +0000 UTC" firstStartedPulling="2026-02-18 19:24:02.101270843 +0000 UTC m=+344.551683639" lastFinishedPulling="2026-02-18 19:24:04.499480255 +0000 UTC m=+346.949893051" observedRunningTime="2026-02-18 19:24:05.183712433 +0000 UTC m=+347.634125229" watchObservedRunningTime="2026-02-18 19:24:08.542030732 +0000 UTC m=+350.992443528" Feb 18 19:24:08 crc kubenswrapper[4754]: I0218 19:24:08.717092 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x4knz" Feb 
18 19:24:08 crc kubenswrapper[4754]: I0218 19:24:08.717176 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x4knz" Feb 18 19:24:08 crc kubenswrapper[4754]: I0218 19:24:08.763288 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x4knz" Feb 18 19:24:09 crc kubenswrapper[4754]: I0218 19:24:09.232348 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x4knz" Feb 18 19:24:09 crc kubenswrapper[4754]: I0218 19:24:09.236134 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ndqg9" Feb 18 19:24:10 crc kubenswrapper[4754]: I0218 19:24:10.871624 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m4v47" Feb 18 19:24:10 crc kubenswrapper[4754]: I0218 19:24:10.872003 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m4v47" Feb 18 19:24:10 crc kubenswrapper[4754]: I0218 19:24:10.914295 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m4v47" Feb 18 19:24:11 crc kubenswrapper[4754]: I0218 19:24:11.035733 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-c6m7k" Feb 18 19:24:11 crc kubenswrapper[4754]: I0218 19:24:11.070182 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-46vpz" Feb 18 19:24:11 crc kubenswrapper[4754]: I0218 19:24:11.070491 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-46vpz" Feb 18 19:24:11 crc kubenswrapper[4754]: I0218 19:24:11.102643 4754 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qcqwx"] Feb 18 19:24:11 crc kubenswrapper[4754]: I0218 19:24:11.195207 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-46vpz" Feb 18 19:24:11 crc kubenswrapper[4754]: I0218 19:24:11.259836 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m4v47" Feb 18 19:24:12 crc kubenswrapper[4754]: I0218 19:24:12.251730 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-46vpz" Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.135752 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" podUID="a00863d2-1742-42b7-a47e-beef12e21834" containerName="registry" containerID="cri-o://003f5e45a83da7edaf2296110784775e03894de115ee60a31635a67cf1ffff9a" gracePeriod=30 Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.351512 4754 generic.go:334] "Generic (PLEG): container finished" podID="a00863d2-1742-42b7-a47e-beef12e21834" containerID="003f5e45a83da7edaf2296110784775e03894de115ee60a31635a67cf1ffff9a" exitCode=0 Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.351603 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" event={"ID":"a00863d2-1742-42b7-a47e-beef12e21834","Type":"ContainerDied","Data":"003f5e45a83da7edaf2296110784775e03894de115ee60a31635a67cf1ffff9a"} Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.564283 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.735319 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a00863d2-1742-42b7-a47e-beef12e21834-registry-tls\") pod \"a00863d2-1742-42b7-a47e-beef12e21834\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.735414 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a00863d2-1742-42b7-a47e-beef12e21834-installation-pull-secrets\") pod \"a00863d2-1742-42b7-a47e-beef12e21834\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.735506 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a00863d2-1742-42b7-a47e-beef12e21834-bound-sa-token\") pod \"a00863d2-1742-42b7-a47e-beef12e21834\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.735550 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns67r\" (UniqueName: \"kubernetes.io/projected/a00863d2-1742-42b7-a47e-beef12e21834-kube-api-access-ns67r\") pod \"a00863d2-1742-42b7-a47e-beef12e21834\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.735593 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a00863d2-1742-42b7-a47e-beef12e21834-ca-trust-extracted\") pod \"a00863d2-1742-42b7-a47e-beef12e21834\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.735640 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a00863d2-1742-42b7-a47e-beef12e21834-trusted-ca\") pod \"a00863d2-1742-42b7-a47e-beef12e21834\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.735849 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"a00863d2-1742-42b7-a47e-beef12e21834\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.735895 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a00863d2-1742-42b7-a47e-beef12e21834-registry-certificates\") pod \"a00863d2-1742-42b7-a47e-beef12e21834\" (UID: \"a00863d2-1742-42b7-a47e-beef12e21834\") " Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.736375 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a00863d2-1742-42b7-a47e-beef12e21834-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a00863d2-1742-42b7-a47e-beef12e21834" (UID: "a00863d2-1742-42b7-a47e-beef12e21834"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.738936 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a00863d2-1742-42b7-a47e-beef12e21834-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "a00863d2-1742-42b7-a47e-beef12e21834" (UID: "a00863d2-1742-42b7-a47e-beef12e21834"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.742677 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00863d2-1742-42b7-a47e-beef12e21834-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a00863d2-1742-42b7-a47e-beef12e21834" (UID: "a00863d2-1742-42b7-a47e-beef12e21834"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.743769 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00863d2-1742-42b7-a47e-beef12e21834-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "a00863d2-1742-42b7-a47e-beef12e21834" (UID: "a00863d2-1742-42b7-a47e-beef12e21834"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.744232 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00863d2-1742-42b7-a47e-beef12e21834-kube-api-access-ns67r" (OuterVolumeSpecName: "kube-api-access-ns67r") pod "a00863d2-1742-42b7-a47e-beef12e21834" (UID: "a00863d2-1742-42b7-a47e-beef12e21834"). InnerVolumeSpecName "kube-api-access-ns67r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.760076 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00863d2-1742-42b7-a47e-beef12e21834-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "a00863d2-1742-42b7-a47e-beef12e21834" (UID: "a00863d2-1742-42b7-a47e-beef12e21834"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.760761 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a00863d2-1742-42b7-a47e-beef12e21834-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "a00863d2-1742-42b7-a47e-beef12e21834" (UID: "a00863d2-1742-42b7-a47e-beef12e21834"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.764623 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "a00863d2-1742-42b7-a47e-beef12e21834" (UID: "a00863d2-1742-42b7-a47e-beef12e21834"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.841247 4754 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a00863d2-1742-42b7-a47e-beef12e21834-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.841302 4754 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a00863d2-1742-42b7-a47e-beef12e21834-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.841322 4754 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a00863d2-1742-42b7-a47e-beef12e21834-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.841338 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns67r\" (UniqueName: 
\"kubernetes.io/projected/a00863d2-1742-42b7-a47e-beef12e21834-kube-api-access-ns67r\") on node \"crc\" DevicePath \"\"" Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.841351 4754 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a00863d2-1742-42b7-a47e-beef12e21834-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.841362 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a00863d2-1742-42b7-a47e-beef12e21834-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:24:36 crc kubenswrapper[4754]: I0218 19:24:36.841375 4754 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a00863d2-1742-42b7-a47e-beef12e21834-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 19:24:37 crc kubenswrapper[4754]: I0218 19:24:37.359510 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" event={"ID":"a00863d2-1742-42b7-a47e-beef12e21834","Type":"ContainerDied","Data":"6c50848b71a4da224a2f06010c1115df88f0b9708904e55f278a3faa818dc039"} Feb 18 19:24:37 crc kubenswrapper[4754]: I0218 19:24:37.359582 4754 scope.go:117] "RemoveContainer" containerID="003f5e45a83da7edaf2296110784775e03894de115ee60a31635a67cf1ffff9a" Feb 18 19:24:37 crc kubenswrapper[4754]: I0218 19:24:37.359603 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-qcqwx" Feb 18 19:24:37 crc kubenswrapper[4754]: I0218 19:24:37.401264 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qcqwx"] Feb 18 19:24:37 crc kubenswrapper[4754]: I0218 19:24:37.410643 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-qcqwx"] Feb 18 19:24:38 crc kubenswrapper[4754]: I0218 19:24:38.097033 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:24:38 crc kubenswrapper[4754]: I0218 19:24:38.097116 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:24:38 crc kubenswrapper[4754]: I0218 19:24:38.221360 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a00863d2-1742-42b7-a47e-beef12e21834" path="/var/lib/kubelet/pods/a00863d2-1742-42b7-a47e-beef12e21834/volumes" Feb 18 19:25:08 crc kubenswrapper[4754]: I0218 19:25:08.097037 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:25:08 crc kubenswrapper[4754]: I0218 19:25:08.098021 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" 
podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:25:08 crc kubenswrapper[4754]: I0218 19:25:08.098107 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:25:08 crc kubenswrapper[4754]: I0218 19:25:08.099106 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15a5a6adf9ccd125edeebc5ca9a6166993061dd39a65b3a2573a64c360c0c83d"} pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:25:08 crc kubenswrapper[4754]: I0218 19:25:08.099265 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" containerID="cri-o://15a5a6adf9ccd125edeebc5ca9a6166993061dd39a65b3a2573a64c360c0c83d" gracePeriod=600 Feb 18 19:25:08 crc kubenswrapper[4754]: I0218 19:25:08.567955 4754 generic.go:334] "Generic (PLEG): container finished" podID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerID="15a5a6adf9ccd125edeebc5ca9a6166993061dd39a65b3a2573a64c360c0c83d" exitCode=0 Feb 18 19:25:08 crc kubenswrapper[4754]: I0218 19:25:08.568082 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerDied","Data":"15a5a6adf9ccd125edeebc5ca9a6166993061dd39a65b3a2573a64c360c0c83d"} Feb 18 19:25:08 crc kubenswrapper[4754]: I0218 19:25:08.568648 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"d68869f95ccdc0f0ce1d0a9d01eddff3e864ac626357691b0e582cc755bc82f8"} Feb 18 19:25:08 crc kubenswrapper[4754]: I0218 19:25:08.568683 4754 scope.go:117] "RemoveContainer" containerID="bd6ee3885fe705fa218abcaadf7212672ea70d1d586f21634588ba9d5c427641" Feb 18 19:27:08 crc kubenswrapper[4754]: I0218 19:27:08.096837 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:27:08 crc kubenswrapper[4754]: I0218 19:27:08.097552 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:27:38 crc kubenswrapper[4754]: I0218 19:27:38.096400 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:27:38 crc kubenswrapper[4754]: I0218 19:27:38.097036 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:28:08 crc kubenswrapper[4754]: I0218 19:28:08.096393 4754 patch_prober.go:28] interesting 
pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:28:08 crc kubenswrapper[4754]: I0218 19:28:08.097363 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:28:08 crc kubenswrapper[4754]: I0218 19:28:08.097440 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:28:08 crc kubenswrapper[4754]: I0218 19:28:08.098394 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d68869f95ccdc0f0ce1d0a9d01eddff3e864ac626357691b0e582cc755bc82f8"} pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:28:08 crc kubenswrapper[4754]: I0218 19:28:08.098478 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" containerID="cri-o://d68869f95ccdc0f0ce1d0a9d01eddff3e864ac626357691b0e582cc755bc82f8" gracePeriod=600 Feb 18 19:28:08 crc kubenswrapper[4754]: I0218 19:28:08.851860 4754 generic.go:334] "Generic (PLEG): container finished" podID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerID="d68869f95ccdc0f0ce1d0a9d01eddff3e864ac626357691b0e582cc755bc82f8" exitCode=0 Feb 18 19:28:08 crc kubenswrapper[4754]: I0218 19:28:08.851931 
4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerDied","Data":"d68869f95ccdc0f0ce1d0a9d01eddff3e864ac626357691b0e582cc755bc82f8"} Feb 18 19:28:08 crc kubenswrapper[4754]: I0218 19:28:08.852454 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"e80755a090c368aaa1f52b9e1d9b61931048fa366f94c61a6b1fb41f5ef0c6f5"} Feb 18 19:28:08 crc kubenswrapper[4754]: I0218 19:28:08.852492 4754 scope.go:117] "RemoveContainer" containerID="15a5a6adf9ccd125edeebc5ca9a6166993061dd39a65b3a2573a64c360c0c83d" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.468958 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-kl7gf"] Feb 18 19:29:24 crc kubenswrapper[4754]: E0218 19:29:24.470027 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a00863d2-1742-42b7-a47e-beef12e21834" containerName="registry" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.470048 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a00863d2-1742-42b7-a47e-beef12e21834" containerName="registry" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.470262 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a00863d2-1742-42b7-a47e-beef12e21834" containerName="registry" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.470921 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kl7gf" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.473597 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-mw7kb"] Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.474187 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mw7kb" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.474286 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.474438 4754 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-wjz59" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.474633 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.477131 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-kl7gf"] Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.478374 4754 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2w6nl" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.492390 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mw7kb"] Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.508555 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-tdmfp"] Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.510838 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-tdmfp" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.515073 4754 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-bm4wc" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.529372 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-tdmfp"] Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.588437 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwtnb\" (UniqueName: \"kubernetes.io/projected/55acb9ab-e688-4187-a815-10fe93ea36a5-kube-api-access-lwtnb\") pod \"cert-manager-cainjector-cf98fcc89-kl7gf\" (UID: \"55acb9ab-e688-4187-a815-10fe93ea36a5\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-kl7gf" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.588602 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtqk7\" (UniqueName: \"kubernetes.io/projected/664186ee-1c4f-4002-b698-4950a0c864af-kube-api-access-wtqk7\") pod \"cert-manager-858654f9db-mw7kb\" (UID: \"664186ee-1c4f-4002-b698-4950a0c864af\") " pod="cert-manager/cert-manager-858654f9db-mw7kb" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.690194 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtqk7\" (UniqueName: \"kubernetes.io/projected/664186ee-1c4f-4002-b698-4950a0c864af-kube-api-access-wtqk7\") pod \"cert-manager-858654f9db-mw7kb\" (UID: \"664186ee-1c4f-4002-b698-4950a0c864af\") " pod="cert-manager/cert-manager-858654f9db-mw7kb" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.690372 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjmmt\" (UniqueName: 
\"kubernetes.io/projected/57a849ad-38ab-47a3-9d27-8a09850ae75f-kube-api-access-cjmmt\") pod \"cert-manager-webhook-687f57d79b-tdmfp\" (UID: \"57a849ad-38ab-47a3-9d27-8a09850ae75f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-tdmfp" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.690501 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwtnb\" (UniqueName: \"kubernetes.io/projected/55acb9ab-e688-4187-a815-10fe93ea36a5-kube-api-access-lwtnb\") pod \"cert-manager-cainjector-cf98fcc89-kl7gf\" (UID: \"55acb9ab-e688-4187-a815-10fe93ea36a5\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-kl7gf" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.711301 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwtnb\" (UniqueName: \"kubernetes.io/projected/55acb9ab-e688-4187-a815-10fe93ea36a5-kube-api-access-lwtnb\") pod \"cert-manager-cainjector-cf98fcc89-kl7gf\" (UID: \"55acb9ab-e688-4187-a815-10fe93ea36a5\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-kl7gf" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.713249 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtqk7\" (UniqueName: \"kubernetes.io/projected/664186ee-1c4f-4002-b698-4950a0c864af-kube-api-access-wtqk7\") pod \"cert-manager-858654f9db-mw7kb\" (UID: \"664186ee-1c4f-4002-b698-4950a0c864af\") " pod="cert-manager/cert-manager-858654f9db-mw7kb" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.792358 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjmmt\" (UniqueName: \"kubernetes.io/projected/57a849ad-38ab-47a3-9d27-8a09850ae75f-kube-api-access-cjmmt\") pod \"cert-manager-webhook-687f57d79b-tdmfp\" (UID: \"57a849ad-38ab-47a3-9d27-8a09850ae75f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-tdmfp" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.798251 4754 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kl7gf" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.809812 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mw7kb" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.812540 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjmmt\" (UniqueName: \"kubernetes.io/projected/57a849ad-38ab-47a3-9d27-8a09850ae75f-kube-api-access-cjmmt\") pod \"cert-manager-webhook-687f57d79b-tdmfp\" (UID: \"57a849ad-38ab-47a3-9d27-8a09850ae75f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-tdmfp" Feb 18 19:29:24 crc kubenswrapper[4754]: I0218 19:29:24.828097 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-tdmfp" Feb 18 19:29:25 crc kubenswrapper[4754]: I0218 19:29:25.175372 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-tdmfp"] Feb 18 19:29:25 crc kubenswrapper[4754]: I0218 19:29:25.187321 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:29:25 crc kubenswrapper[4754]: I0218 19:29:25.328424 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-kl7gf"] Feb 18 19:29:25 crc kubenswrapper[4754]: I0218 19:29:25.331758 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mw7kb"] Feb 18 19:29:25 crc kubenswrapper[4754]: W0218 19:29:25.332520 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55acb9ab_e688_4187_a815_10fe93ea36a5.slice/crio-cb3bbe3c2a7b36a9e80d2e023cb17221085da5e9e4239dcc40ffecf30e2fb032 WatchSource:0}: Error finding container 
cb3bbe3c2a7b36a9e80d2e023cb17221085da5e9e4239dcc40ffecf30e2fb032: Status 404 returned error can't find the container with id cb3bbe3c2a7b36a9e80d2e023cb17221085da5e9e4239dcc40ffecf30e2fb032 Feb 18 19:29:25 crc kubenswrapper[4754]: W0218 19:29:25.335203 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod664186ee_1c4f_4002_b698_4950a0c864af.slice/crio-cfd7333c73cb90b703c9109e20fa07dfc197f15f17b2ddc6cc0e30259dc3d3a2 WatchSource:0}: Error finding container cfd7333c73cb90b703c9109e20fa07dfc197f15f17b2ddc6cc0e30259dc3d3a2: Status 404 returned error can't find the container with id cfd7333c73cb90b703c9109e20fa07dfc197f15f17b2ddc6cc0e30259dc3d3a2 Feb 18 19:29:25 crc kubenswrapper[4754]: I0218 19:29:25.378841 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kl7gf" event={"ID":"55acb9ab-e688-4187-a815-10fe93ea36a5","Type":"ContainerStarted","Data":"cb3bbe3c2a7b36a9e80d2e023cb17221085da5e9e4239dcc40ffecf30e2fb032"} Feb 18 19:29:25 crc kubenswrapper[4754]: I0218 19:29:25.380057 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mw7kb" event={"ID":"664186ee-1c4f-4002-b698-4950a0c864af","Type":"ContainerStarted","Data":"cfd7333c73cb90b703c9109e20fa07dfc197f15f17b2ddc6cc0e30259dc3d3a2"} Feb 18 19:29:25 crc kubenswrapper[4754]: I0218 19:29:25.380862 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-tdmfp" event={"ID":"57a849ad-38ab-47a3-9d27-8a09850ae75f","Type":"ContainerStarted","Data":"478040e047c658ee8956d8284c07b228343fcc9e3d3580842df5c7dbab843d54"} Feb 18 19:29:29 crc kubenswrapper[4754]: I0218 19:29:29.409154 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kl7gf" 
event={"ID":"55acb9ab-e688-4187-a815-10fe93ea36a5","Type":"ContainerStarted","Data":"25a5755bc1e49afb354cbdfc4cc42752279be3e82fe82f40d3001842ecd10221"} Feb 18 19:29:29 crc kubenswrapper[4754]: I0218 19:29:29.411934 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mw7kb" event={"ID":"664186ee-1c4f-4002-b698-4950a0c864af","Type":"ContainerStarted","Data":"e5a33304122b0a7551a138b7d59d07a36338169264bff5daea2cb2f76125eed8"} Feb 18 19:29:29 crc kubenswrapper[4754]: I0218 19:29:29.413930 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-tdmfp" event={"ID":"57a849ad-38ab-47a3-9d27-8a09850ae75f","Type":"ContainerStarted","Data":"3b5808bded3229a1119ae1245f286a14c77dc90c3ce4fe95fd5ca91fb1f54d50"} Feb 18 19:29:29 crc kubenswrapper[4754]: I0218 19:29:29.414116 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-tdmfp" Feb 18 19:29:29 crc kubenswrapper[4754]: I0218 19:29:29.438615 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kl7gf" podStartSLOduration=1.807133954 podStartE2EDuration="5.438594401s" podCreationTimestamp="2026-02-18 19:29:24 +0000 UTC" firstStartedPulling="2026-02-18 19:29:25.335583588 +0000 UTC m=+667.785996384" lastFinishedPulling="2026-02-18 19:29:28.967044035 +0000 UTC m=+671.417456831" observedRunningTime="2026-02-18 19:29:29.435092631 +0000 UTC m=+671.885505467" watchObservedRunningTime="2026-02-18 19:29:29.438594401 +0000 UTC m=+671.889007197" Feb 18 19:29:29 crc kubenswrapper[4754]: I0218 19:29:29.478664 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-mw7kb" podStartSLOduration=1.628190252 podStartE2EDuration="5.478632991s" podCreationTimestamp="2026-02-18 19:29:24 +0000 UTC" firstStartedPulling="2026-02-18 19:29:25.337157807 +0000 UTC 
m=+667.787570603" lastFinishedPulling="2026-02-18 19:29:29.187600536 +0000 UTC m=+671.638013342" observedRunningTime="2026-02-18 19:29:29.477702041 +0000 UTC m=+671.928114857" watchObservedRunningTime="2026-02-18 19:29:29.478632991 +0000 UTC m=+671.929045817" Feb 18 19:29:29 crc kubenswrapper[4754]: I0218 19:29:29.501347 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-tdmfp" podStartSLOduration=2.433437417 podStartE2EDuration="5.50132152s" podCreationTimestamp="2026-02-18 19:29:24 +0000 UTC" firstStartedPulling="2026-02-18 19:29:25.187085151 +0000 UTC m=+667.637497947" lastFinishedPulling="2026-02-18 19:29:28.254969254 +0000 UTC m=+670.705382050" observedRunningTime="2026-02-18 19:29:29.496754854 +0000 UTC m=+671.947167650" watchObservedRunningTime="2026-02-18 19:29:29.50132152 +0000 UTC m=+671.951734316" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.339402 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-glx55"] Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.340803 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovn-controller" containerID="cri-o://9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2" gracePeriod=30 Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.340996 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="sbdb" containerID="cri-o://cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2" gracePeriod=30 Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.341081 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" 
podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="nbdb" containerID="cri-o://ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4" gracePeriod=30 Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.341125 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="northd" containerID="cri-o://dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f" gracePeriod=30 Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.341232 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovn-acl-logging" containerID="cri-o://b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed" gracePeriod=30 Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.341208 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="kube-rbac-proxy-node" containerID="cri-o://2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d" gracePeriod=30 Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.341290 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168" gracePeriod=30 Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.376539 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovnkube-controller" 
containerID="cri-o://f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211" gracePeriod=30 Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.462485 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pp2q2_55244610-cf2e-4b72-b8b7-9d55898fbb62/kube-multus/2.log" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.463313 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pp2q2_55244610-cf2e-4b72-b8b7-9d55898fbb62/kube-multus/1.log" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.464121 4754 generic.go:334] "Generic (PLEG): container finished" podID="55244610-cf2e-4b72-b8b7-9d55898fbb62" containerID="fa5805441467198b1c86089bf816b3cb2a9e7b35ed917649659cc4f52c6e1b00" exitCode=2 Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.464296 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pp2q2" event={"ID":"55244610-cf2e-4b72-b8b7-9d55898fbb62","Type":"ContainerDied","Data":"fa5805441467198b1c86089bf816b3cb2a9e7b35ed917649659cc4f52c6e1b00"} Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.464416 4754 scope.go:117] "RemoveContainer" containerID="1527f77f3016297e8b5250f9098c4049afcc33b06d7b6a5378f753a3870608a6" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.465234 4754 scope.go:117] "RemoveContainer" containerID="fa5805441467198b1c86089bf816b3cb2a9e7b35ed917649659cc4f52c6e1b00" Feb 18 19:29:34 crc kubenswrapper[4754]: E0218 19:29:34.465512 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-pp2q2_openshift-multus(55244610-cf2e-4b72-b8b7-9d55898fbb62)\"" pod="openshift-multus/multus-pp2q2" podUID="55244610-cf2e-4b72-b8b7-9d55898fbb62" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.678044 4754 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glx55_82e5683f-ada7-4578-a6e3-6f0dd72dd149/ovnkube-controller/3.log" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.682652 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glx55_82e5683f-ada7-4578-a6e3-6f0dd72dd149/ovn-acl-logging/0.log" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.683821 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glx55_82e5683f-ada7-4578-a6e3-6f0dd72dd149/ovn-controller/0.log" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.684627 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750347 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpsvw\" (UniqueName: \"kubernetes.io/projected/82e5683f-ada7-4578-a6e3-6f0dd72dd149-kube-api-access-rpsvw\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750392 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-run-openvswitch\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750412 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-systemd-units\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750435 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-etc-openvswitch\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750467 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-cni-netd\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750488 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/82e5683f-ada7-4578-a6e3-6f0dd72dd149-ovnkube-script-lib\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750507 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-run-ovn\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750528 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-run-netns\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750544 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-var-lib-openvswitch\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 
19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750563 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-run-systemd\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750581 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/82e5683f-ada7-4578-a6e3-6f0dd72dd149-ovnkube-config\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750595 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-log-socket\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750616 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/82e5683f-ada7-4578-a6e3-6f0dd72dd149-ovn-node-metrics-cert\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750642 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/82e5683f-ada7-4578-a6e3-6f0dd72dd149-env-overrides\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750671 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-slash\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750691 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-cni-bin\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750708 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-node-log\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750731 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-run-ovn-kubernetes\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750759 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-var-lib-cni-networks-ovn-kubernetes\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750775 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-kubelet\") pod \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\" (UID: \"82e5683f-ada7-4578-a6e3-6f0dd72dd149\") " Feb 18 
19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.750939 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.751356 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.751739 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nx9db"] Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752037 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82e5683f-ada7-4578-a6e3-6f0dd72dd149-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752044 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-slash" (OuterVolumeSpecName: "host-slash") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752064 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-log-socket" (OuterVolumeSpecName: "log-socket") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752273 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752349 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752511 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752594 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82e5683f-ada7-4578-a6e3-6f0dd72dd149-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752630 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752680 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752690 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: E0218 19:29:34.752078 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovnkube-controller" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752731 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752749 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovnkube-controller" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752763 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: E0218 19:29:34.752782 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovn-controller" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752794 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovn-controller" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752803 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: E0218 19:29:34.752837 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovn-acl-logging" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752849 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovn-acl-logging" Feb 18 19:29:34 crc kubenswrapper[4754]: E0218 19:29:34.752862 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="kubecfg-setup" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752871 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="kubecfg-setup" Feb 18 19:29:34 crc kubenswrapper[4754]: E0218 19:29:34.752882 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="northd" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752890 4754 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="northd" Feb 18 19:29:34 crc kubenswrapper[4754]: E0218 19:29:34.752903 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="sbdb" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752910 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="sbdb" Feb 18 19:29:34 crc kubenswrapper[4754]: E0218 19:29:34.752922 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovnkube-controller" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752930 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovnkube-controller" Feb 18 19:29:34 crc kubenswrapper[4754]: E0218 19:29:34.752939 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="kube-rbac-proxy-node" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752947 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="kube-rbac-proxy-node" Feb 18 19:29:34 crc kubenswrapper[4754]: E0218 19:29:34.752965 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752973 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 19:29:34 crc kubenswrapper[4754]: E0218 19:29:34.752987 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovnkube-controller" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752996 4754 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovnkube-controller" Feb 18 19:29:34 crc kubenswrapper[4754]: E0218 19:29:34.753017 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="nbdb" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.753025 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="nbdb" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.753291 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="sbdb" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.753306 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovn-acl-logging" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.753315 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovnkube-controller" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.753325 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.753344 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovnkube-controller" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.753353 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="kube-rbac-proxy-node" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.753362 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="northd" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.753371 4754 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovnkube-controller" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.753379 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovn-controller" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.753389 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="nbdb" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.753369 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82e5683f-ada7-4578-a6e3-6f0dd72dd149-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: E0218 19:29:34.753517 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovnkube-controller" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.753529 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovnkube-controller" Feb 18 19:29:34 crc kubenswrapper[4754]: E0218 19:29:34.753540 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovnkube-controller" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.753547 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovnkube-controller" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.753658 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovnkube-controller" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 
19:29:34.753668 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerName="ovnkube-controller" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.752565 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-node-log" (OuterVolumeSpecName: "node-log") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.756049 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.761751 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e5683f-ada7-4578-a6e3-6f0dd72dd149-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.762606 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82e5683f-ada7-4578-a6e3-6f0dd72dd149-kube-api-access-rpsvw" (OuterVolumeSpecName: "kube-api-access-rpsvw") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "kube-api-access-rpsvw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.768279 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "82e5683f-ada7-4578-a6e3-6f0dd72dd149" (UID: "82e5683f-ada7-4578-a6e3-6f0dd72dd149"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.832361 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-tdmfp" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.853469 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36336a87-7152-4256-8325-1f1b9a78ff60-ovnkube-config\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.853530 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-cni-bin\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.853565 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-systemd-units\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.853611 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-var-lib-openvswitch\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.853638 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-run-ovn\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.853722 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-slash\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.854803 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.854868 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-cni-netd\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: 
I0218 19:29:34.854900 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwrnj\" (UniqueName: \"kubernetes.io/projected/36336a87-7152-4256-8325-1f1b9a78ff60-kube-api-access-mwrnj\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.854951 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-etc-openvswitch\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855018 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36336a87-7152-4256-8325-1f1b9a78ff60-env-overrides\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855041 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36336a87-7152-4256-8325-1f1b9a78ff60-ovn-node-metrics-cert\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855130 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-run-systemd\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 
19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855291 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-run-openvswitch\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855405 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-node-log\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855463 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-log-socket\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855496 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-run-netns\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855543 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-kubelet\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 
19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855611 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-run-ovn-kubernetes\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855672 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/36336a87-7152-4256-8325-1f1b9a78ff60-ovnkube-script-lib\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855745 4754 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855771 4754 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855785 4754 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/82e5683f-ada7-4578-a6e3-6f0dd72dd149-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855799 4754 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-log-socket\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855812 4754 reconciler_common.go:293] "Volume 
detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/82e5683f-ada7-4578-a6e3-6f0dd72dd149-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855827 4754 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/82e5683f-ada7-4578-a6e3-6f0dd72dd149-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855841 4754 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-slash\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855854 4754 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855867 4754 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-node-log\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855880 4754 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855894 4754 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855907 4754 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855922 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpsvw\" (UniqueName: \"kubernetes.io/projected/82e5683f-ada7-4578-a6e3-6f0dd72dd149-kube-api-access-rpsvw\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855936 4754 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855948 4754 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855960 4754 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855973 4754 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855987 4754 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/82e5683f-ada7-4578-a6e3-6f0dd72dd149-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.855999 4754 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 
18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.856012 4754 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/82e5683f-ada7-4578-a6e3-6f0dd72dd149-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.957265 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36336a87-7152-4256-8325-1f1b9a78ff60-ovnkube-config\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.957707 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-cni-bin\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.957812 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-systemd-units\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.957938 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-var-lib-openvswitch\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.958078 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-run-ovn\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.958194 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36336a87-7152-4256-8325-1f1b9a78ff60-ovnkube-config\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.958273 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-systemd-units\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.958314 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-cni-bin\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.958349 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-var-lib-openvswitch\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.958380 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-run-ovn\") pod \"ovnkube-node-nx9db\" (UID: 
\"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.958276 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-slash\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.958584 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.958692 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-cni-netd\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.958781 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwrnj\" (UniqueName: \"kubernetes.io/projected/36336a87-7152-4256-8325-1f1b9a78ff60-kube-api-access-mwrnj\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.958866 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-etc-openvswitch\") pod \"ovnkube-node-nx9db\" (UID: 
\"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.958957 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36336a87-7152-4256-8325-1f1b9a78ff60-env-overrides\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.959043 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36336a87-7152-4256-8325-1f1b9a78ff60-ovn-node-metrics-cert\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.959131 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-run-systemd\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.959252 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-run-openvswitch\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.959360 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-node-log\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.959454 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-log-socket\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.959550 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-run-netns\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.959659 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-kubelet\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.959760 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-run-ovn-kubernetes\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.959863 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/36336a87-7152-4256-8325-1f1b9a78ff60-ovnkube-script-lib\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 
crc kubenswrapper[4754]: I0218 19:29:34.960637 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/36336a87-7152-4256-8325-1f1b9a78ff60-ovnkube-script-lib\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.958486 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-slash\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.960845 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.960976 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-cni-netd\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.961551 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-etc-openvswitch\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.962114 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36336a87-7152-4256-8325-1f1b9a78ff60-env-overrides\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.962870 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-log-socket\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.962935 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-run-systemd\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.962969 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-run-openvswitch\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.962999 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-node-log\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.963031 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-kubelet\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.963063 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-run-netns\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.963094 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/36336a87-7152-4256-8325-1f1b9a78ff60-host-run-ovn-kubernetes\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.969889 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36336a87-7152-4256-8325-1f1b9a78ff60-ovn-node-metrics-cert\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:34 crc kubenswrapper[4754]: I0218 19:29:34.983736 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwrnj\" (UniqueName: \"kubernetes.io/projected/36336a87-7152-4256-8325-1f1b9a78ff60-kube-api-access-mwrnj\") pod \"ovnkube-node-nx9db\" (UID: \"36336a87-7152-4256-8325-1f1b9a78ff60\") " pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.071975 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.472031 4754 generic.go:334] "Generic (PLEG): container finished" podID="36336a87-7152-4256-8325-1f1b9a78ff60" containerID="8473ae28211daa918212db1e27e0803541865fc8dec1b8cabee5463f55e8c2a5" exitCode=0 Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.472111 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" event={"ID":"36336a87-7152-4256-8325-1f1b9a78ff60","Type":"ContainerDied","Data":"8473ae28211daa918212db1e27e0803541865fc8dec1b8cabee5463f55e8c2a5"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.472216 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" event={"ID":"36336a87-7152-4256-8325-1f1b9a78ff60","Type":"ContainerStarted","Data":"f95b41bd1ab73b2fdb3d456c6921c79655ca7db92efb6c6de48abcb73ab2e74f"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.474776 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pp2q2_55244610-cf2e-4b72-b8b7-9d55898fbb62/kube-multus/2.log" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.478190 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glx55_82e5683f-ada7-4578-a6e3-6f0dd72dd149/ovnkube-controller/3.log" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.486771 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glx55_82e5683f-ada7-4578-a6e3-6f0dd72dd149/ovn-acl-logging/0.log" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.487590 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-glx55_82e5683f-ada7-4578-a6e3-6f0dd72dd149/ovn-controller/0.log" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488060 4754 generic.go:334] "Generic (PLEG): container finished" 
podID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerID="f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211" exitCode=0 Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488089 4754 generic.go:334] "Generic (PLEG): container finished" podID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerID="cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2" exitCode=0 Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488098 4754 generic.go:334] "Generic (PLEG): container finished" podID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerID="ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4" exitCode=0 Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488108 4754 generic.go:334] "Generic (PLEG): container finished" podID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerID="dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f" exitCode=0 Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488118 4754 generic.go:334] "Generic (PLEG): container finished" podID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerID="6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168" exitCode=0 Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488126 4754 generic.go:334] "Generic (PLEG): container finished" podID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerID="2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d" exitCode=0 Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488155 4754 generic.go:334] "Generic (PLEG): container finished" podID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerID="b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed" exitCode=143 Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488164 4754 generic.go:334] "Generic (PLEG): container finished" podID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" containerID="9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2" exitCode=143 Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488189 4754 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerDied","Data":"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488219 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerDied","Data":"cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488231 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerDied","Data":"ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488240 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerDied","Data":"dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488253 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerDied","Data":"6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488263 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerDied","Data":"2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488275 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488287 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488293 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488300 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488305 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488310 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488312 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488323 4754 scope.go:117] "RemoveContainer" containerID="f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488315 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488407 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488414 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488425 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerDied","Data":"b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488436 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488444 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488450 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488457 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488464 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488471 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488479 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488485 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488490 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488496 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488503 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerDied","Data":"9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488512 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488519 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488524 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488530 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488535 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488541 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488546 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 
19:29:35.488551 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488557 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488562 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488570 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-glx55" event={"ID":"82e5683f-ada7-4578-a6e3-6f0dd72dd149","Type":"ContainerDied","Data":"5274bd996f203fc6c66de41bb98371f580b753a65d6bb819bf202865f4f96db6"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488578 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488584 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488591 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488596 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488602 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488607 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488613 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488619 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488624 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.488630 4754 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b"} Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.556697 4754 scope.go:117] "RemoveContainer" containerID="e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.583939 4754 scope.go:117] "RemoveContainer" containerID="cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 
19:29:35.597314 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-glx55"] Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.598693 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-glx55"] Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.614760 4754 scope.go:117] "RemoveContainer" containerID="ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.632332 4754 scope.go:117] "RemoveContainer" containerID="dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.660354 4754 scope.go:117] "RemoveContainer" containerID="6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.679207 4754 scope.go:117] "RemoveContainer" containerID="2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.709656 4754 scope.go:117] "RemoveContainer" containerID="b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.732072 4754 scope.go:117] "RemoveContainer" containerID="9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.757729 4754 scope.go:117] "RemoveContainer" containerID="4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.791343 4754 scope.go:117] "RemoveContainer" containerID="f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211" Feb 18 19:29:35 crc kubenswrapper[4754]: E0218 19:29:35.791883 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211\": container with ID 
starting with f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211 not found: ID does not exist" containerID="f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.791927 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211"} err="failed to get container status \"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211\": rpc error: code = NotFound desc = could not find container \"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211\": container with ID starting with f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.791961 4754 scope.go:117] "RemoveContainer" containerID="e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810" Feb 18 19:29:35 crc kubenswrapper[4754]: E0218 19:29:35.792515 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810\": container with ID starting with e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810 not found: ID does not exist" containerID="e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.792542 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810"} err="failed to get container status \"e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810\": rpc error: code = NotFound desc = could not find container \"e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810\": container with ID starting with e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810 not found: 
ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.792558 4754 scope.go:117] "RemoveContainer" containerID="cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2" Feb 18 19:29:35 crc kubenswrapper[4754]: E0218 19:29:35.792866 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\": container with ID starting with cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2 not found: ID does not exist" containerID="cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.792897 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2"} err="failed to get container status \"cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\": rpc error: code = NotFound desc = could not find container \"cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\": container with ID starting with cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.792916 4754 scope.go:117] "RemoveContainer" containerID="ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4" Feb 18 19:29:35 crc kubenswrapper[4754]: E0218 19:29:35.793252 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\": container with ID starting with ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4 not found: ID does not exist" containerID="ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.793278 4754 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4"} err="failed to get container status \"ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\": rpc error: code = NotFound desc = could not find container \"ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\": container with ID starting with ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.793291 4754 scope.go:117] "RemoveContainer" containerID="dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f" Feb 18 19:29:35 crc kubenswrapper[4754]: E0218 19:29:35.793577 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\": container with ID starting with dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f not found: ID does not exist" containerID="dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.793614 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f"} err="failed to get container status \"dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\": rpc error: code = NotFound desc = could not find container \"dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\": container with ID starting with dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.793632 4754 scope.go:117] "RemoveContainer" containerID="6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168" Feb 18 19:29:35 crc kubenswrapper[4754]: E0218 19:29:35.793969 4754 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\": container with ID starting with 6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168 not found: ID does not exist" containerID="6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.793989 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168"} err="failed to get container status \"6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\": rpc error: code = NotFound desc = could not find container \"6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\": container with ID starting with 6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.794007 4754 scope.go:117] "RemoveContainer" containerID="2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d" Feb 18 19:29:35 crc kubenswrapper[4754]: E0218 19:29:35.794258 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\": container with ID starting with 2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d not found: ID does not exist" containerID="2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.794284 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d"} err="failed to get container status \"2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\": rpc error: code = NotFound desc = could 
not find container \"2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\": container with ID starting with 2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.794314 4754 scope.go:117] "RemoveContainer" containerID="b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed" Feb 18 19:29:35 crc kubenswrapper[4754]: E0218 19:29:35.794824 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\": container with ID starting with b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed not found: ID does not exist" containerID="b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.794922 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed"} err="failed to get container status \"b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\": rpc error: code = NotFound desc = could not find container \"b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\": container with ID starting with b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.794988 4754 scope.go:117] "RemoveContainer" containerID="9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2" Feb 18 19:29:35 crc kubenswrapper[4754]: E0218 19:29:35.795453 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\": container with ID starting with 9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2 not found: 
ID does not exist" containerID="9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.795483 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2"} err="failed to get container status \"9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\": rpc error: code = NotFound desc = could not find container \"9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\": container with ID starting with 9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.795501 4754 scope.go:117] "RemoveContainer" containerID="4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b" Feb 18 19:29:35 crc kubenswrapper[4754]: E0218 19:29:35.795880 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\": container with ID starting with 4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b not found: ID does not exist" containerID="4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.795946 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b"} err="failed to get container status \"4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\": rpc error: code = NotFound desc = could not find container \"4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\": container with ID starting with 4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.795988 4754 
scope.go:117] "RemoveContainer" containerID="f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.796371 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211"} err="failed to get container status \"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211\": rpc error: code = NotFound desc = could not find container \"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211\": container with ID starting with f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.796398 4754 scope.go:117] "RemoveContainer" containerID="e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.796614 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810"} err="failed to get container status \"e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810\": rpc error: code = NotFound desc = could not find container \"e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810\": container with ID starting with e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.796635 4754 scope.go:117] "RemoveContainer" containerID="cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.797043 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2"} err="failed to get container status \"cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\": rpc 
error: code = NotFound desc = could not find container \"cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\": container with ID starting with cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.797065 4754 scope.go:117] "RemoveContainer" containerID="ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.797341 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4"} err="failed to get container status \"ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\": rpc error: code = NotFound desc = could not find container \"ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\": container with ID starting with ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.797362 4754 scope.go:117] "RemoveContainer" containerID="dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.797565 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f"} err="failed to get container status \"dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\": rpc error: code = NotFound desc = could not find container \"dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\": container with ID starting with dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.797581 4754 scope.go:117] "RemoveContainer" containerID="6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168" Feb 18 19:29:35 crc 
kubenswrapper[4754]: I0218 19:29:35.797790 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168"} err="failed to get container status \"6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\": rpc error: code = NotFound desc = could not find container \"6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\": container with ID starting with 6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.797812 4754 scope.go:117] "RemoveContainer" containerID="2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.798018 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d"} err="failed to get container status \"2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\": rpc error: code = NotFound desc = could not find container \"2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\": container with ID starting with 2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.798040 4754 scope.go:117] "RemoveContainer" containerID="b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.798408 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed"} err="failed to get container status \"b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\": rpc error: code = NotFound desc = could not find container \"b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\": container 
with ID starting with b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.798467 4754 scope.go:117] "RemoveContainer" containerID="9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.798907 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2"} err="failed to get container status \"9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\": rpc error: code = NotFound desc = could not find container \"9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\": container with ID starting with 9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.798969 4754 scope.go:117] "RemoveContainer" containerID="4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.799299 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b"} err="failed to get container status \"4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\": rpc error: code = NotFound desc = could not find container \"4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\": container with ID starting with 4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.799329 4754 scope.go:117] "RemoveContainer" containerID="f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.799606 4754 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211"} err="failed to get container status \"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211\": rpc error: code = NotFound desc = could not find container \"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211\": container with ID starting with f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.799629 4754 scope.go:117] "RemoveContainer" containerID="e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.799849 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810"} err="failed to get container status \"e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810\": rpc error: code = NotFound desc = could not find container \"e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810\": container with ID starting with e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.799872 4754 scope.go:117] "RemoveContainer" containerID="cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.800068 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2"} err="failed to get container status \"cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\": rpc error: code = NotFound desc = could not find container \"cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\": container with ID starting with cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2 not found: ID does not 
exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.800088 4754 scope.go:117] "RemoveContainer" containerID="ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.800375 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4"} err="failed to get container status \"ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\": rpc error: code = NotFound desc = could not find container \"ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\": container with ID starting with ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.800397 4754 scope.go:117] "RemoveContainer" containerID="dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.800599 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f"} err="failed to get container status \"dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\": rpc error: code = NotFound desc = could not find container \"dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\": container with ID starting with dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.800618 4754 scope.go:117] "RemoveContainer" containerID="6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.800944 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168"} err="failed to get container status 
\"6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\": rpc error: code = NotFound desc = could not find container \"6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\": container with ID starting with 6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.800986 4754 scope.go:117] "RemoveContainer" containerID="2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.801409 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d"} err="failed to get container status \"2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\": rpc error: code = NotFound desc = could not find container \"2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\": container with ID starting with 2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.801450 4754 scope.go:117] "RemoveContainer" containerID="b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.801908 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed"} err="failed to get container status \"b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\": rpc error: code = NotFound desc = could not find container \"b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\": container with ID starting with b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.801935 4754 scope.go:117] "RemoveContainer" 
containerID="9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.802707 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2"} err="failed to get container status \"9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\": rpc error: code = NotFound desc = could not find container \"9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\": container with ID starting with 9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.802731 4754 scope.go:117] "RemoveContainer" containerID="4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.803287 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b"} err="failed to get container status \"4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\": rpc error: code = NotFound desc = could not find container \"4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\": container with ID starting with 4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.803323 4754 scope.go:117] "RemoveContainer" containerID="f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.803721 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211"} err="failed to get container status \"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211\": rpc error: code = NotFound desc = could 
not find container \"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211\": container with ID starting with f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.803747 4754 scope.go:117] "RemoveContainer" containerID="e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.804069 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810"} err="failed to get container status \"e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810\": rpc error: code = NotFound desc = could not find container \"e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810\": container with ID starting with e425d591c454aaf7779c98a7a457194fe7dce93b38f5122ba5fd4ce61e144810 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.804088 4754 scope.go:117] "RemoveContainer" containerID="cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.804566 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2"} err="failed to get container status \"cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\": rpc error: code = NotFound desc = could not find container \"cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2\": container with ID starting with cff9313e2673d0759ef9fc9654f040086abb58f5ac9bcb9b955ce4d91e93afd2 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.804586 4754 scope.go:117] "RemoveContainer" containerID="ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 
19:29:35.804889 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4"} err="failed to get container status \"ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\": rpc error: code = NotFound desc = could not find container \"ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4\": container with ID starting with ba42b95b13b4bd7d29c167fb5077aeb5434eaf74c2f7d4faa9f0a3f94d4bc8d4 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.804909 4754 scope.go:117] "RemoveContainer" containerID="dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.805500 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f"} err="failed to get container status \"dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\": rpc error: code = NotFound desc = could not find container \"dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f\": container with ID starting with dccead0a3dfb9e73751b8aefc3c18a1a5496b75b7a4518ced969a05503b1135f not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.805531 4754 scope.go:117] "RemoveContainer" containerID="6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.805951 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168"} err="failed to get container status \"6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\": rpc error: code = NotFound desc = could not find container \"6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168\": container with ID starting with 
6969360aee9b4da4bde27cc79ef422550ffd23df36edb3e4c1884bec0dbeb168 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.805984 4754 scope.go:117] "RemoveContainer" containerID="2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.806397 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d"} err="failed to get container status \"2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\": rpc error: code = NotFound desc = could not find container \"2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d\": container with ID starting with 2057e987bfda7e10c5b75bfa2baec4996cb397bc73baef87cf913e7ffd870e7d not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.806432 4754 scope.go:117] "RemoveContainer" containerID="b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.806776 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed"} err="failed to get container status \"b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\": rpc error: code = NotFound desc = could not find container \"b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed\": container with ID starting with b39b9bd008ffc960e7889a01a59ff5a2cb282be83514c22a125a31ff38c84aed not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.806805 4754 scope.go:117] "RemoveContainer" containerID="9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.807262 4754 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2"} err="failed to get container status \"9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\": rpc error: code = NotFound desc = could not find container \"9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2\": container with ID starting with 9100dfdd9f6c82e9b42cb02d9c208625e96432d39be3f441c785e74b475aedd2 not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.807291 4754 scope.go:117] "RemoveContainer" containerID="4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.807765 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b"} err="failed to get container status \"4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\": rpc error: code = NotFound desc = could not find container \"4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b\": container with ID starting with 4d809f66b03a2511a687cd39a8df81e123fd214718058d27ca790886d7092b8b not found: ID does not exist" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.807796 4754 scope.go:117] "RemoveContainer" containerID="f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211" Feb 18 19:29:35 crc kubenswrapper[4754]: I0218 19:29:35.808184 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211"} err="failed to get container status \"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211\": rpc error: code = NotFound desc = could not find container \"f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211\": container with ID starting with f32ffa1769bdc940e6ff98b8d3be81b6345e0408719161af1d44b8c716661211 not found: ID does not 
exist" Feb 18 19:29:36 crc kubenswrapper[4754]: I0218 19:29:36.219212 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82e5683f-ada7-4578-a6e3-6f0dd72dd149" path="/var/lib/kubelet/pods/82e5683f-ada7-4578-a6e3-6f0dd72dd149/volumes" Feb 18 19:29:36 crc kubenswrapper[4754]: I0218 19:29:36.503454 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" event={"ID":"36336a87-7152-4256-8325-1f1b9a78ff60","Type":"ContainerStarted","Data":"f7d0e5be4459a8a2b63e30dbbaa804ad606d4a8d798e5a0e0fa760c3240d3f8b"} Feb 18 19:29:36 crc kubenswrapper[4754]: I0218 19:29:36.503832 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" event={"ID":"36336a87-7152-4256-8325-1f1b9a78ff60","Type":"ContainerStarted","Data":"b2cd4c8b5497f9b53f70a0d419b8dd4237ab5cc7431fc999c400320c2ec22990"} Feb 18 19:29:36 crc kubenswrapper[4754]: I0218 19:29:36.503849 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" event={"ID":"36336a87-7152-4256-8325-1f1b9a78ff60","Type":"ContainerStarted","Data":"735c677022cde6e2ac8168983a108b7ac73caad7546979795aa26a4f7279b1e0"} Feb 18 19:29:36 crc kubenswrapper[4754]: I0218 19:29:36.503862 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" event={"ID":"36336a87-7152-4256-8325-1f1b9a78ff60","Type":"ContainerStarted","Data":"81aeca8498d6f667a11a1876971def5f06a8963807846f6060080cb66162cabd"} Feb 18 19:29:36 crc kubenswrapper[4754]: I0218 19:29:36.503875 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" event={"ID":"36336a87-7152-4256-8325-1f1b9a78ff60","Type":"ContainerStarted","Data":"b3f90945b9565c074f54182e9fe0c5a1e0cc058b5e6053cc06da332005390603"} Feb 18 19:29:36 crc kubenswrapper[4754]: I0218 19:29:36.503891 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" event={"ID":"36336a87-7152-4256-8325-1f1b9a78ff60","Type":"ContainerStarted","Data":"8fd0ba6f8640afa28fc72c71a315f212897e5b4a53c3b50815074f9185fb12a5"} Feb 18 19:29:39 crc kubenswrapper[4754]: I0218 19:29:39.550873 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" event={"ID":"36336a87-7152-4256-8325-1f1b9a78ff60","Type":"ContainerStarted","Data":"6de040c5b685e5e68f734d73cc9c8db3e7486ee5abccc04072cac960f6200611"} Feb 18 19:29:41 crc kubenswrapper[4754]: I0218 19:29:41.567597 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" event={"ID":"36336a87-7152-4256-8325-1f1b9a78ff60","Type":"ContainerStarted","Data":"9cd4935f445a5d1c8e7fe6526a4d576d5523330a8e22b4438604c5606b6e0e1d"} Feb 18 19:29:41 crc kubenswrapper[4754]: I0218 19:29:41.568077 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:41 crc kubenswrapper[4754]: I0218 19:29:41.568206 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:41 crc kubenswrapper[4754]: I0218 19:29:41.604121 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:41 crc kubenswrapper[4754]: I0218 19:29:41.605420 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" podStartSLOduration=7.605396722 podStartE2EDuration="7.605396722s" podCreationTimestamp="2026-02-18 19:29:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:29:41.600432034 +0000 UTC m=+684.050844830" watchObservedRunningTime="2026-02-18 19:29:41.605396722 +0000 UTC m=+684.055809518" Feb 18 19:29:42 crc 
kubenswrapper[4754]: I0218 19:29:42.594035 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:42 crc kubenswrapper[4754]: I0218 19:29:42.627848 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:29:46 crc kubenswrapper[4754]: I0218 19:29:46.210260 4754 scope.go:117] "RemoveContainer" containerID="fa5805441467198b1c86089bf816b3cb2a9e7b35ed917649659cc4f52c6e1b00" Feb 18 19:29:46 crc kubenswrapper[4754]: E0218 19:29:46.210929 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-pp2q2_openshift-multus(55244610-cf2e-4b72-b8b7-9d55898fbb62)\"" pod="openshift-multus/multus-pp2q2" podUID="55244610-cf2e-4b72-b8b7-9d55898fbb62" Feb 18 19:29:58 crc kubenswrapper[4754]: I0218 19:29:58.213318 4754 scope.go:117] "RemoveContainer" containerID="fa5805441467198b1c86089bf816b3cb2a9e7b35ed917649659cc4f52c6e1b00" Feb 18 19:29:58 crc kubenswrapper[4754]: I0218 19:29:58.692817 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pp2q2_55244610-cf2e-4b72-b8b7-9d55898fbb62/kube-multus/2.log" Feb 18 19:29:58 crc kubenswrapper[4754]: I0218 19:29:58.692885 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pp2q2" event={"ID":"55244610-cf2e-4b72-b8b7-9d55898fbb62","Type":"ContainerStarted","Data":"a20d67b2490d3efa2db5cb16780b0b96924705576994e08a4919e9564b6f4176"} Feb 18 19:30:00 crc kubenswrapper[4754]: I0218 19:30:00.183532 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6"] Feb 18 19:30:00 crc kubenswrapper[4754]: I0218 19:30:00.184957 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6" Feb 18 19:30:00 crc kubenswrapper[4754]: I0218 19:30:00.191219 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 19:30:00 crc kubenswrapper[4754]: I0218 19:30:00.192011 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 19:30:00 crc kubenswrapper[4754]: I0218 19:30:00.204269 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6"] Feb 18 19:30:00 crc kubenswrapper[4754]: I0218 19:30:00.250513 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/03f749c4-d445-4f49-a8da-48cf3519f172-secret-volume\") pod \"collect-profiles-29524050-tsfz6\" (UID: \"03f749c4-d445-4f49-a8da-48cf3519f172\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6" Feb 18 19:30:00 crc kubenswrapper[4754]: I0218 19:30:00.250575 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmz42\" (UniqueName: \"kubernetes.io/projected/03f749c4-d445-4f49-a8da-48cf3519f172-kube-api-access-dmz42\") pod \"collect-profiles-29524050-tsfz6\" (UID: \"03f749c4-d445-4f49-a8da-48cf3519f172\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6" Feb 18 19:30:00 crc kubenswrapper[4754]: I0218 19:30:00.250638 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03f749c4-d445-4f49-a8da-48cf3519f172-config-volume\") pod \"collect-profiles-29524050-tsfz6\" (UID: \"03f749c4-d445-4f49-a8da-48cf3519f172\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6" Feb 18 19:30:00 crc kubenswrapper[4754]: I0218 19:30:00.351800 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03f749c4-d445-4f49-a8da-48cf3519f172-config-volume\") pod \"collect-profiles-29524050-tsfz6\" (UID: \"03f749c4-d445-4f49-a8da-48cf3519f172\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6" Feb 18 19:30:00 crc kubenswrapper[4754]: I0218 19:30:00.351892 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/03f749c4-d445-4f49-a8da-48cf3519f172-secret-volume\") pod \"collect-profiles-29524050-tsfz6\" (UID: \"03f749c4-d445-4f49-a8da-48cf3519f172\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6" Feb 18 19:30:00 crc kubenswrapper[4754]: I0218 19:30:00.351918 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmz42\" (UniqueName: \"kubernetes.io/projected/03f749c4-d445-4f49-a8da-48cf3519f172-kube-api-access-dmz42\") pod \"collect-profiles-29524050-tsfz6\" (UID: \"03f749c4-d445-4f49-a8da-48cf3519f172\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6" Feb 18 19:30:00 crc kubenswrapper[4754]: I0218 19:30:00.352922 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03f749c4-d445-4f49-a8da-48cf3519f172-config-volume\") pod \"collect-profiles-29524050-tsfz6\" (UID: \"03f749c4-d445-4f49-a8da-48cf3519f172\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6" Feb 18 19:30:00 crc kubenswrapper[4754]: I0218 19:30:00.360285 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/03f749c4-d445-4f49-a8da-48cf3519f172-secret-volume\") pod \"collect-profiles-29524050-tsfz6\" (UID: \"03f749c4-d445-4f49-a8da-48cf3519f172\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6" Feb 18 19:30:00 crc kubenswrapper[4754]: I0218 19:30:00.372968 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmz42\" (UniqueName: \"kubernetes.io/projected/03f749c4-d445-4f49-a8da-48cf3519f172-kube-api-access-dmz42\") pod \"collect-profiles-29524050-tsfz6\" (UID: \"03f749c4-d445-4f49-a8da-48cf3519f172\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6" Feb 18 19:30:00 crc kubenswrapper[4754]: I0218 19:30:00.500886 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6" Feb 18 19:30:00 crc kubenswrapper[4754]: I0218 19:30:00.980820 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6"] Feb 18 19:30:00 crc kubenswrapper[4754]: W0218 19:30:00.989810 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03f749c4_d445_4f49_a8da_48cf3519f172.slice/crio-6ad8b63b1abcc3000d31d65df9b8276905c6feca619123c4b95e4e89ee02ce97 WatchSource:0}: Error finding container 6ad8b63b1abcc3000d31d65df9b8276905c6feca619123c4b95e4e89ee02ce97: Status 404 returned error can't find the container with id 6ad8b63b1abcc3000d31d65df9b8276905c6feca619123c4b95e4e89ee02ce97 Feb 18 19:30:01 crc kubenswrapper[4754]: I0218 19:30:01.728194 4754 generic.go:334] "Generic (PLEG): container finished" podID="03f749c4-d445-4f49-a8da-48cf3519f172" containerID="1d5c8301a28d5b184f3e221e359fab418cbb3c0d6c7e6bceffe4b6910caa3077" exitCode=0 Feb 18 19:30:01 crc kubenswrapper[4754]: I0218 19:30:01.728262 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6" event={"ID":"03f749c4-d445-4f49-a8da-48cf3519f172","Type":"ContainerDied","Data":"1d5c8301a28d5b184f3e221e359fab418cbb3c0d6c7e6bceffe4b6910caa3077"} Feb 18 19:30:01 crc kubenswrapper[4754]: I0218 19:30:01.728648 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6" event={"ID":"03f749c4-d445-4f49-a8da-48cf3519f172","Type":"ContainerStarted","Data":"6ad8b63b1abcc3000d31d65df9b8276905c6feca619123c4b95e4e89ee02ce97"} Feb 18 19:30:01 crc kubenswrapper[4754]: I0218 19:30:01.909562 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w"] Feb 18 19:30:01 crc kubenswrapper[4754]: I0218 19:30:01.911354 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w" Feb 18 19:30:01 crc kubenswrapper[4754]: I0218 19:30:01.913711 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 19:30:01 crc kubenswrapper[4754]: I0218 19:30:01.926094 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w"] Feb 18 19:30:02 crc kubenswrapper[4754]: I0218 19:30:02.072697 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hfgx\" (UniqueName: \"kubernetes.io/projected/3899abe2-5fc8-4305-8101-7785882939ea-kube-api-access-4hfgx\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w\" (UID: \"3899abe2-5fc8-4305-8101-7785882939ea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w" Feb 18 19:30:02 crc kubenswrapper[4754]: I0218 19:30:02.072875 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3899abe2-5fc8-4305-8101-7785882939ea-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w\" (UID: \"3899abe2-5fc8-4305-8101-7785882939ea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w" Feb 18 19:30:02 crc kubenswrapper[4754]: I0218 19:30:02.072909 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3899abe2-5fc8-4305-8101-7785882939ea-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w\" (UID: \"3899abe2-5fc8-4305-8101-7785882939ea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w" Feb 18 19:30:02 crc kubenswrapper[4754]: I0218 19:30:02.174090 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3899abe2-5fc8-4305-8101-7785882939ea-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w\" (UID: \"3899abe2-5fc8-4305-8101-7785882939ea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w" Feb 18 19:30:02 crc kubenswrapper[4754]: I0218 19:30:02.174184 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3899abe2-5fc8-4305-8101-7785882939ea-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w\" (UID: \"3899abe2-5fc8-4305-8101-7785882939ea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w" Feb 18 19:30:02 crc kubenswrapper[4754]: I0218 19:30:02.174216 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hfgx\" (UniqueName: 
\"kubernetes.io/projected/3899abe2-5fc8-4305-8101-7785882939ea-kube-api-access-4hfgx\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w\" (UID: \"3899abe2-5fc8-4305-8101-7785882939ea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w" Feb 18 19:30:02 crc kubenswrapper[4754]: I0218 19:30:02.174755 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3899abe2-5fc8-4305-8101-7785882939ea-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w\" (UID: \"3899abe2-5fc8-4305-8101-7785882939ea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w" Feb 18 19:30:02 crc kubenswrapper[4754]: I0218 19:30:02.174758 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3899abe2-5fc8-4305-8101-7785882939ea-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w\" (UID: \"3899abe2-5fc8-4305-8101-7785882939ea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w" Feb 18 19:30:02 crc kubenswrapper[4754]: I0218 19:30:02.197492 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hfgx\" (UniqueName: \"kubernetes.io/projected/3899abe2-5fc8-4305-8101-7785882939ea-kube-api-access-4hfgx\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w\" (UID: \"3899abe2-5fc8-4305-8101-7785882939ea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w" Feb 18 19:30:02 crc kubenswrapper[4754]: I0218 19:30:02.227520 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w" Feb 18 19:30:02 crc kubenswrapper[4754]: I0218 19:30:02.436362 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w"] Feb 18 19:30:02 crc kubenswrapper[4754]: W0218 19:30:02.443930 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3899abe2_5fc8_4305_8101_7785882939ea.slice/crio-0f0d4277f1d616d24acc1fea4a08328f9cf2898175377231856842ac1cfdf1c5 WatchSource:0}: Error finding container 0f0d4277f1d616d24acc1fea4a08328f9cf2898175377231856842ac1cfdf1c5: Status 404 returned error can't find the container with id 0f0d4277f1d616d24acc1fea4a08328f9cf2898175377231856842ac1cfdf1c5 Feb 18 19:30:02 crc kubenswrapper[4754]: I0218 19:30:02.735504 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w" event={"ID":"3899abe2-5fc8-4305-8101-7785882939ea","Type":"ContainerStarted","Data":"dbf7fc0f071bdfaac54025193f53944fb99cfc83b132e52f9428438d6c388a14"} Feb 18 19:30:02 crc kubenswrapper[4754]: I0218 19:30:02.735560 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w" event={"ID":"3899abe2-5fc8-4305-8101-7785882939ea","Type":"ContainerStarted","Data":"0f0d4277f1d616d24acc1fea4a08328f9cf2898175377231856842ac1cfdf1c5"} Feb 18 19:30:03 crc kubenswrapper[4754]: I0218 19:30:03.029073 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6" Feb 18 19:30:03 crc kubenswrapper[4754]: I0218 19:30:03.187694 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmz42\" (UniqueName: \"kubernetes.io/projected/03f749c4-d445-4f49-a8da-48cf3519f172-kube-api-access-dmz42\") pod \"03f749c4-d445-4f49-a8da-48cf3519f172\" (UID: \"03f749c4-d445-4f49-a8da-48cf3519f172\") " Feb 18 19:30:03 crc kubenswrapper[4754]: I0218 19:30:03.187772 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/03f749c4-d445-4f49-a8da-48cf3519f172-secret-volume\") pod \"03f749c4-d445-4f49-a8da-48cf3519f172\" (UID: \"03f749c4-d445-4f49-a8da-48cf3519f172\") " Feb 18 19:30:03 crc kubenswrapper[4754]: I0218 19:30:03.187875 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03f749c4-d445-4f49-a8da-48cf3519f172-config-volume\") pod \"03f749c4-d445-4f49-a8da-48cf3519f172\" (UID: \"03f749c4-d445-4f49-a8da-48cf3519f172\") " Feb 18 19:30:03 crc kubenswrapper[4754]: I0218 19:30:03.189109 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f749c4-d445-4f49-a8da-48cf3519f172-config-volume" (OuterVolumeSpecName: "config-volume") pod "03f749c4-d445-4f49-a8da-48cf3519f172" (UID: "03f749c4-d445-4f49-a8da-48cf3519f172"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:30:03 crc kubenswrapper[4754]: I0218 19:30:03.195858 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f749c4-d445-4f49-a8da-48cf3519f172-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "03f749c4-d445-4f49-a8da-48cf3519f172" (UID: "03f749c4-d445-4f49-a8da-48cf3519f172"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:30:03 crc kubenswrapper[4754]: I0218 19:30:03.197356 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03f749c4-d445-4f49-a8da-48cf3519f172-kube-api-access-dmz42" (OuterVolumeSpecName: "kube-api-access-dmz42") pod "03f749c4-d445-4f49-a8da-48cf3519f172" (UID: "03f749c4-d445-4f49-a8da-48cf3519f172"). InnerVolumeSpecName "kube-api-access-dmz42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:30:03 crc kubenswrapper[4754]: I0218 19:30:03.289712 4754 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/03f749c4-d445-4f49-a8da-48cf3519f172-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:30:03 crc kubenswrapper[4754]: I0218 19:30:03.289765 4754 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03f749c4-d445-4f49-a8da-48cf3519f172-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:30:03 crc kubenswrapper[4754]: I0218 19:30:03.289786 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmz42\" (UniqueName: \"kubernetes.io/projected/03f749c4-d445-4f49-a8da-48cf3519f172-kube-api-access-dmz42\") on node \"crc\" DevicePath \"\"" Feb 18 19:30:03 crc kubenswrapper[4754]: I0218 19:30:03.743600 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6" event={"ID":"03f749c4-d445-4f49-a8da-48cf3519f172","Type":"ContainerDied","Data":"6ad8b63b1abcc3000d31d65df9b8276905c6feca619123c4b95e4e89ee02ce97"} Feb 18 19:30:03 crc kubenswrapper[4754]: I0218 19:30:03.743691 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ad8b63b1abcc3000d31d65df9b8276905c6feca619123c4b95e4e89ee02ce97" Feb 18 19:30:03 crc kubenswrapper[4754]: I0218 19:30:03.743629 4754 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6" Feb 18 19:30:03 crc kubenswrapper[4754]: I0218 19:30:03.745712 4754 generic.go:334] "Generic (PLEG): container finished" podID="3899abe2-5fc8-4305-8101-7785882939ea" containerID="dbf7fc0f071bdfaac54025193f53944fb99cfc83b132e52f9428438d6c388a14" exitCode=0 Feb 18 19:30:03 crc kubenswrapper[4754]: I0218 19:30:03.745755 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w" event={"ID":"3899abe2-5fc8-4305-8101-7785882939ea","Type":"ContainerDied","Data":"dbf7fc0f071bdfaac54025193f53944fb99cfc83b132e52f9428438d6c388a14"} Feb 18 19:30:05 crc kubenswrapper[4754]: I0218 19:30:05.110777 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nx9db" Feb 18 19:30:05 crc kubenswrapper[4754]: I0218 19:30:05.758490 4754 generic.go:334] "Generic (PLEG): container finished" podID="3899abe2-5fc8-4305-8101-7785882939ea" containerID="d94401a84e8b4ab2b6ff1f629b6b3097b683f5ad00dff2842614342656080472" exitCode=0 Feb 18 19:30:05 crc kubenswrapper[4754]: I0218 19:30:05.758550 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w" event={"ID":"3899abe2-5fc8-4305-8101-7785882939ea","Type":"ContainerDied","Data":"d94401a84e8b4ab2b6ff1f629b6b3097b683f5ad00dff2842614342656080472"} Feb 18 19:30:06 crc kubenswrapper[4754]: I0218 19:30:06.770919 4754 generic.go:334] "Generic (PLEG): container finished" podID="3899abe2-5fc8-4305-8101-7785882939ea" containerID="671c2480a7bf75a29b0d41f5d9335bb014fc681ae0fc3729f5722fca1597c2a7" exitCode=0 Feb 18 19:30:06 crc kubenswrapper[4754]: I0218 19:30:06.771019 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w" 
event={"ID":"3899abe2-5fc8-4305-8101-7785882939ea","Type":"ContainerDied","Data":"671c2480a7bf75a29b0d41f5d9335bb014fc681ae0fc3729f5722fca1597c2a7"} Feb 18 19:30:08 crc kubenswrapper[4754]: I0218 19:30:08.045383 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w" Feb 18 19:30:08 crc kubenswrapper[4754]: I0218 19:30:08.096519 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:30:08 crc kubenswrapper[4754]: I0218 19:30:08.096586 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:30:08 crc kubenswrapper[4754]: I0218 19:30:08.163287 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hfgx\" (UniqueName: \"kubernetes.io/projected/3899abe2-5fc8-4305-8101-7785882939ea-kube-api-access-4hfgx\") pod \"3899abe2-5fc8-4305-8101-7785882939ea\" (UID: \"3899abe2-5fc8-4305-8101-7785882939ea\") " Feb 18 19:30:08 crc kubenswrapper[4754]: I0218 19:30:08.163447 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3899abe2-5fc8-4305-8101-7785882939ea-util\") pod \"3899abe2-5fc8-4305-8101-7785882939ea\" (UID: \"3899abe2-5fc8-4305-8101-7785882939ea\") " Feb 18 19:30:08 crc kubenswrapper[4754]: I0218 19:30:08.163542 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/3899abe2-5fc8-4305-8101-7785882939ea-bundle\") pod \"3899abe2-5fc8-4305-8101-7785882939ea\" (UID: \"3899abe2-5fc8-4305-8101-7785882939ea\") " Feb 18 19:30:08 crc kubenswrapper[4754]: I0218 19:30:08.166767 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3899abe2-5fc8-4305-8101-7785882939ea-bundle" (OuterVolumeSpecName: "bundle") pod "3899abe2-5fc8-4305-8101-7785882939ea" (UID: "3899abe2-5fc8-4305-8101-7785882939ea"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:30:08 crc kubenswrapper[4754]: I0218 19:30:08.174748 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3899abe2-5fc8-4305-8101-7785882939ea-kube-api-access-4hfgx" (OuterVolumeSpecName: "kube-api-access-4hfgx") pod "3899abe2-5fc8-4305-8101-7785882939ea" (UID: "3899abe2-5fc8-4305-8101-7785882939ea"). InnerVolumeSpecName "kube-api-access-4hfgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:30:08 crc kubenswrapper[4754]: I0218 19:30:08.265254 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hfgx\" (UniqueName: \"kubernetes.io/projected/3899abe2-5fc8-4305-8101-7785882939ea-kube-api-access-4hfgx\") on node \"crc\" DevicePath \"\"" Feb 18 19:30:08 crc kubenswrapper[4754]: I0218 19:30:08.265768 4754 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3899abe2-5fc8-4305-8101-7785882939ea-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:30:08 crc kubenswrapper[4754]: I0218 19:30:08.524366 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3899abe2-5fc8-4305-8101-7785882939ea-util" (OuterVolumeSpecName: "util") pod "3899abe2-5fc8-4305-8101-7785882939ea" (UID: "3899abe2-5fc8-4305-8101-7785882939ea"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:30:08 crc kubenswrapper[4754]: I0218 19:30:08.570832 4754 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3899abe2-5fc8-4305-8101-7785882939ea-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:30:08 crc kubenswrapper[4754]: I0218 19:30:08.785200 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w" event={"ID":"3899abe2-5fc8-4305-8101-7785882939ea","Type":"ContainerDied","Data":"0f0d4277f1d616d24acc1fea4a08328f9cf2898175377231856842ac1cfdf1c5"} Feb 18 19:30:08 crc kubenswrapper[4754]: I0218 19:30:08.785250 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f0d4277f1d616d24acc1fea4a08328f9cf2898175377231856842ac1cfdf1c5" Feb 18 19:30:08 crc kubenswrapper[4754]: I0218 19:30:08.785321 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mkz6w" Feb 18 19:30:18 crc kubenswrapper[4754]: I0218 19:30:18.963311 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-95plb"] Feb 18 19:30:18 crc kubenswrapper[4754]: E0218 19:30:18.964450 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f749c4-d445-4f49-a8da-48cf3519f172" containerName="collect-profiles" Feb 18 19:30:18 crc kubenswrapper[4754]: I0218 19:30:18.964468 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f749c4-d445-4f49-a8da-48cf3519f172" containerName="collect-profiles" Feb 18 19:30:18 crc kubenswrapper[4754]: E0218 19:30:18.964485 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3899abe2-5fc8-4305-8101-7785882939ea" containerName="extract" Feb 18 19:30:18 crc kubenswrapper[4754]: I0218 19:30:18.964493 4754 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="3899abe2-5fc8-4305-8101-7785882939ea" containerName="extract" Feb 18 19:30:18 crc kubenswrapper[4754]: E0218 19:30:18.964503 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3899abe2-5fc8-4305-8101-7785882939ea" containerName="pull" Feb 18 19:30:18 crc kubenswrapper[4754]: I0218 19:30:18.964513 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="3899abe2-5fc8-4305-8101-7785882939ea" containerName="pull" Feb 18 19:30:18 crc kubenswrapper[4754]: E0218 19:30:18.964531 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3899abe2-5fc8-4305-8101-7785882939ea" containerName="util" Feb 18 19:30:18 crc kubenswrapper[4754]: I0218 19:30:18.964541 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="3899abe2-5fc8-4305-8101-7785882939ea" containerName="util" Feb 18 19:30:18 crc kubenswrapper[4754]: I0218 19:30:18.964655 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="3899abe2-5fc8-4305-8101-7785882939ea" containerName="extract" Feb 18 19:30:18 crc kubenswrapper[4754]: I0218 19:30:18.964669 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f749c4-d445-4f49-a8da-48cf3519f172" containerName="collect-profiles" Feb 18 19:30:18 crc kubenswrapper[4754]: I0218 19:30:18.965234 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-95plb" Feb 18 19:30:18 crc kubenswrapper[4754]: I0218 19:30:18.967726 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 18 19:30:18 crc kubenswrapper[4754]: I0218 19:30:18.976419 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-95plb"] Feb 18 19:30:18 crc kubenswrapper[4754]: I0218 19:30:18.978330 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 18 19:30:18 crc kubenswrapper[4754]: I0218 19:30:18.978637 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-fknwj" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.101838 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7"] Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.103187 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.106020 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.106091 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-xkhfc" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.122706 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t"] Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.123734 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.127888 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7"] Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.142780 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t"] Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.143351 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmpb4\" (UniqueName: \"kubernetes.io/projected/f9f8719d-0708-4ffb-8e09-b0e34e2d3c55-kube-api-access-wmpb4\") pod \"obo-prometheus-operator-68bc856cb9-95plb\" (UID: \"f9f8719d-0708-4ffb-8e09-b0e34e2d3c55\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-95plb" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.244986 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/557ab570-bd29-43a9-9efe-0bd8dba8bd54-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7\" (UID: \"557ab570-bd29-43a9-9efe-0bd8dba8bd54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.245085 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce9c41cb-0406-41a5-ae1b-90af7f1df2a8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t\" (UID: \"ce9c41cb-0406-41a5-ae1b-90af7f1df2a8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.245112 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/557ab570-bd29-43a9-9efe-0bd8dba8bd54-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7\" (UID: \"557ab570-bd29-43a9-9efe-0bd8dba8bd54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.245136 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmpb4\" (UniqueName: \"kubernetes.io/projected/f9f8719d-0708-4ffb-8e09-b0e34e2d3c55-kube-api-access-wmpb4\") pod \"obo-prometheus-operator-68bc856cb9-95plb\" (UID: \"f9f8719d-0708-4ffb-8e09-b0e34e2d3c55\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-95plb" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.245213 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce9c41cb-0406-41a5-ae1b-90af7f1df2a8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t\" (UID: \"ce9c41cb-0406-41a5-ae1b-90af7f1df2a8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.273408 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmpb4\" (UniqueName: \"kubernetes.io/projected/f9f8719d-0708-4ffb-8e09-b0e34e2d3c55-kube-api-access-wmpb4\") pod \"obo-prometheus-operator-68bc856cb9-95plb\" (UID: \"f9f8719d-0708-4ffb-8e09-b0e34e2d3c55\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-95plb" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.281928 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-95plb" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.331470 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-d477x"] Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.332227 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-d477x" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.341053 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-gjqw9" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.341246 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.348046 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/557ab570-bd29-43a9-9efe-0bd8dba8bd54-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7\" (UID: \"557ab570-bd29-43a9-9efe-0bd8dba8bd54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.348109 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce9c41cb-0406-41a5-ae1b-90af7f1df2a8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t\" (UID: \"ce9c41cb-0406-41a5-ae1b-90af7f1df2a8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.348134 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/557ab570-bd29-43a9-9efe-0bd8dba8bd54-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7\" (UID: \"557ab570-bd29-43a9-9efe-0bd8dba8bd54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.348174 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce9c41cb-0406-41a5-ae1b-90af7f1df2a8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t\" (UID: \"ce9c41cb-0406-41a5-ae1b-90af7f1df2a8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.348196 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v67vk\" (UniqueName: \"kubernetes.io/projected/def7adbf-347c-446b-948a-969f999ec34e-kube-api-access-v67vk\") pod \"observability-operator-59bdc8b94-d477x\" (UID: \"def7adbf-347c-446b-948a-969f999ec34e\") " pod="openshift-operators/observability-operator-59bdc8b94-d477x" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.348217 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/def7adbf-347c-446b-948a-969f999ec34e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-d477x\" (UID: \"def7adbf-347c-446b-948a-969f999ec34e\") " pod="openshift-operators/observability-operator-59bdc8b94-d477x" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.349315 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-d477x"] Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.352951 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/557ab570-bd29-43a9-9efe-0bd8dba8bd54-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7\" (UID: \"557ab570-bd29-43a9-9efe-0bd8dba8bd54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.354382 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/557ab570-bd29-43a9-9efe-0bd8dba8bd54-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7\" (UID: \"557ab570-bd29-43a9-9efe-0bd8dba8bd54\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.354799 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce9c41cb-0406-41a5-ae1b-90af7f1df2a8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t\" (UID: \"ce9c41cb-0406-41a5-ae1b-90af7f1df2a8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.359736 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce9c41cb-0406-41a5-ae1b-90af7f1df2a8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t\" (UID: \"ce9c41cb-0406-41a5-ae1b-90af7f1df2a8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.418799 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.437109 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.458924 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v67vk\" (UniqueName: \"kubernetes.io/projected/def7adbf-347c-446b-948a-969f999ec34e-kube-api-access-v67vk\") pod \"observability-operator-59bdc8b94-d477x\" (UID: \"def7adbf-347c-446b-948a-969f999ec34e\") " pod="openshift-operators/observability-operator-59bdc8b94-d477x" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.458981 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/def7adbf-347c-446b-948a-969f999ec34e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-d477x\" (UID: \"def7adbf-347c-446b-948a-969f999ec34e\") " pod="openshift-operators/observability-operator-59bdc8b94-d477x" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.475302 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/def7adbf-347c-446b-948a-969f999ec34e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-d477x\" (UID: \"def7adbf-347c-446b-948a-969f999ec34e\") " pod="openshift-operators/observability-operator-59bdc8b94-d477x" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.485733 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v67vk\" (UniqueName: \"kubernetes.io/projected/def7adbf-347c-446b-948a-969f999ec34e-kube-api-access-v67vk\") pod \"observability-operator-59bdc8b94-d477x\" (UID: \"def7adbf-347c-446b-948a-969f999ec34e\") " pod="openshift-operators/observability-operator-59bdc8b94-d477x" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.517176 4754 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operators/perses-operator-5bf474d74f-ftfxk"] Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.517985 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ftfxk" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.530780 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-mbnmb" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.537560 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ftfxk"] Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.616477 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-95plb"] Feb 18 19:30:19 crc kubenswrapper[4754]: W0218 19:30:19.619645 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9f8719d_0708_4ffb_8e09_b0e34e2d3c55.slice/crio-3c39736edb0aabc47dd38702f9e80df4008dbd118f8c968aa226c4871d792b8a WatchSource:0}: Error finding container 3c39736edb0aabc47dd38702f9e80df4008dbd118f8c968aa226c4871d792b8a: Status 404 returned error can't find the container with id 3c39736edb0aabc47dd38702f9e80df4008dbd118f8c968aa226c4871d792b8a Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.661752 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/9fe2ea9e-f211-4040-a032-b913387550fc-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ftfxk\" (UID: \"9fe2ea9e-f211-4040-a032-b913387550fc\") " pod="openshift-operators/perses-operator-5bf474d74f-ftfxk" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.661832 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr5t9\" (UniqueName: 
\"kubernetes.io/projected/9fe2ea9e-f211-4040-a032-b913387550fc-kube-api-access-dr5t9\") pod \"perses-operator-5bf474d74f-ftfxk\" (UID: \"9fe2ea9e-f211-4040-a032-b913387550fc\") " pod="openshift-operators/perses-operator-5bf474d74f-ftfxk" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.688935 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-d477x" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.769193 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr5t9\" (UniqueName: \"kubernetes.io/projected/9fe2ea9e-f211-4040-a032-b913387550fc-kube-api-access-dr5t9\") pod \"perses-operator-5bf474d74f-ftfxk\" (UID: \"9fe2ea9e-f211-4040-a032-b913387550fc\") " pod="openshift-operators/perses-operator-5bf474d74f-ftfxk" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.769276 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/9fe2ea9e-f211-4040-a032-b913387550fc-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ftfxk\" (UID: \"9fe2ea9e-f211-4040-a032-b913387550fc\") " pod="openshift-operators/perses-operator-5bf474d74f-ftfxk" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.770582 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/9fe2ea9e-f211-4040-a032-b913387550fc-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ftfxk\" (UID: \"9fe2ea9e-f211-4040-a032-b913387550fc\") " pod="openshift-operators/perses-operator-5bf474d74f-ftfxk" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.792011 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr5t9\" (UniqueName: \"kubernetes.io/projected/9fe2ea9e-f211-4040-a032-b913387550fc-kube-api-access-dr5t9\") pod \"perses-operator-5bf474d74f-ftfxk\" 
(UID: \"9fe2ea9e-f211-4040-a032-b913387550fc\") " pod="openshift-operators/perses-operator-5bf474d74f-ftfxk" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.821204 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t"] Feb 18 19:30:19 crc kubenswrapper[4754]: W0218 19:30:19.826268 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce9c41cb_0406_41a5_ae1b_90af7f1df2a8.slice/crio-5e250978e489da45bc8f109b8fa4bc15424dbba6ad5764dcdc0588ffcbfafbce WatchSource:0}: Error finding container 5e250978e489da45bc8f109b8fa4bc15424dbba6ad5764dcdc0588ffcbfafbce: Status 404 returned error can't find the container with id 5e250978e489da45bc8f109b8fa4bc15424dbba6ad5764dcdc0588ffcbfafbce Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.837378 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7"] Feb 18 19:30:19 crc kubenswrapper[4754]: W0218 19:30:19.848100 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod557ab570_bd29_43a9_9efe_0bd8dba8bd54.slice/crio-4ca8627bbc1664d238c4115b4b3b7ff4bfc5c0601ee5a119b64b86355d6f0ffb WatchSource:0}: Error finding container 4ca8627bbc1664d238c4115b4b3b7ff4bfc5c0601ee5a119b64b86355d6f0ffb: Status 404 returned error can't find the container with id 4ca8627bbc1664d238c4115b4b3b7ff4bfc5c0601ee5a119b64b86355d6f0ffb Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.857730 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t" event={"ID":"ce9c41cb-0406-41a5-ae1b-90af7f1df2a8","Type":"ContainerStarted","Data":"5e250978e489da45bc8f109b8fa4bc15424dbba6ad5764dcdc0588ffcbfafbce"} Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 
19:30:19.868607 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ftfxk" Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.869320 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-95plb" event={"ID":"f9f8719d-0708-4ffb-8e09-b0e34e2d3c55","Type":"ContainerStarted","Data":"3c39736edb0aabc47dd38702f9e80df4008dbd118f8c968aa226c4871d792b8a"} Feb 18 19:30:19 crc kubenswrapper[4754]: I0218 19:30:19.967956 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-d477x"] Feb 18 19:30:19 crc kubenswrapper[4754]: W0218 19:30:19.987922 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddef7adbf_347c_446b_948a_969f999ec34e.slice/crio-6342763ecae59fc4d35570c537e3fbc1c68032e4e231288920982a5b5a622cb6 WatchSource:0}: Error finding container 6342763ecae59fc4d35570c537e3fbc1c68032e4e231288920982a5b5a622cb6: Status 404 returned error can't find the container with id 6342763ecae59fc4d35570c537e3fbc1c68032e4e231288920982a5b5a622cb6 Feb 18 19:30:20 crc kubenswrapper[4754]: I0218 19:30:20.098883 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ftfxk"] Feb 18 19:30:20 crc kubenswrapper[4754]: I0218 19:30:20.877846 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7" event={"ID":"557ab570-bd29-43a9-9efe-0bd8dba8bd54","Type":"ContainerStarted","Data":"4ca8627bbc1664d238c4115b4b3b7ff4bfc5c0601ee5a119b64b86355d6f0ffb"} Feb 18 19:30:20 crc kubenswrapper[4754]: I0218 19:30:20.880097 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-ftfxk" 
event={"ID":"9fe2ea9e-f211-4040-a032-b913387550fc","Type":"ContainerStarted","Data":"80c78192d43c20c9ed8321a62c534721154ac79a75d02e862a02f8fbc89386d5"} Feb 18 19:30:20 crc kubenswrapper[4754]: I0218 19:30:20.885381 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-d477x" event={"ID":"def7adbf-347c-446b-948a-969f999ec34e","Type":"ContainerStarted","Data":"6342763ecae59fc4d35570c537e3fbc1c68032e4e231288920982a5b5a622cb6"} Feb 18 19:30:33 crc kubenswrapper[4754]: I0218 19:30:33.019988 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-95plb" event={"ID":"f9f8719d-0708-4ffb-8e09-b0e34e2d3c55","Type":"ContainerStarted","Data":"b720e524d748b2797f50eaa6e3b15c5d9b0ec503c8f5d11f8c8fdc4e1bd88b46"} Feb 18 19:30:33 crc kubenswrapper[4754]: I0218 19:30:33.029456 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7" event={"ID":"557ab570-bd29-43a9-9efe-0bd8dba8bd54","Type":"ContainerStarted","Data":"aa3a5e6686b733b6c03c200403b56cf975d9e5df6c168d4cabb8b957c19c3a33"} Feb 18 19:30:33 crc kubenswrapper[4754]: I0218 19:30:33.032766 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-ftfxk" event={"ID":"9fe2ea9e-f211-4040-a032-b913387550fc","Type":"ContainerStarted","Data":"6a77205837f912b8a93a86033adf403ff320b724413dfaca17383969a6738fa7"} Feb 18 19:30:33 crc kubenswrapper[4754]: I0218 19:30:33.033438 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-ftfxk" Feb 18 19:30:33 crc kubenswrapper[4754]: I0218 19:30:33.035255 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-d477x" 
event={"ID":"def7adbf-347c-446b-948a-969f999ec34e","Type":"ContainerStarted","Data":"7434f8559cc8e811a22cd027c04b671bf5de6c83a55267e84de29bafefb02c7d"} Feb 18 19:30:33 crc kubenswrapper[4754]: I0218 19:30:33.037443 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-d477x" Feb 18 19:30:33 crc kubenswrapper[4754]: I0218 19:30:33.037538 4754 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-d477x container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.29:8081/healthz\": dial tcp 10.217.0.29:8081: connect: connection refused" start-of-body= Feb 18 19:30:33 crc kubenswrapper[4754]: I0218 19:30:33.037722 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-d477x" podUID="def7adbf-347c-446b-948a-969f999ec34e" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.29:8081/healthz\": dial tcp 10.217.0.29:8081: connect: connection refused" Feb 18 19:30:33 crc kubenswrapper[4754]: I0218 19:30:33.040743 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t" event={"ID":"ce9c41cb-0406-41a5-ae1b-90af7f1df2a8","Type":"ContainerStarted","Data":"2eb794bafba9660c98019bf712d627fceaac91bbc02bdd6da2c187a2291f9339"} Feb 18 19:30:33 crc kubenswrapper[4754]: I0218 19:30:33.075759 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-95plb" podStartSLOduration=2.125098561 podStartE2EDuration="15.075742515s" podCreationTimestamp="2026-02-18 19:30:18 +0000 UTC" firstStartedPulling="2026-02-18 19:30:19.622432727 +0000 UTC m=+722.072845523" lastFinishedPulling="2026-02-18 19:30:32.573076681 +0000 UTC m=+735.023489477" observedRunningTime="2026-02-18 19:30:33.073077181 +0000 UTC m=+735.523489977" 
watchObservedRunningTime="2026-02-18 19:30:33.075742515 +0000 UTC m=+735.526155311" Feb 18 19:30:33 crc kubenswrapper[4754]: I0218 19:30:33.148169 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-hdg6t" podStartSLOduration=1.368017172 podStartE2EDuration="14.148131019s" podCreationTimestamp="2026-02-18 19:30:19 +0000 UTC" firstStartedPulling="2026-02-18 19:30:19.828867062 +0000 UTC m=+722.279279858" lastFinishedPulling="2026-02-18 19:30:32.608980909 +0000 UTC m=+735.059393705" observedRunningTime="2026-02-18 19:30:33.132956663 +0000 UTC m=+735.583369449" watchObservedRunningTime="2026-02-18 19:30:33.148131019 +0000 UTC m=+735.598543805" Feb 18 19:30:33 crc kubenswrapper[4754]: I0218 19:30:33.166874 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-ftfxk" podStartSLOduration=1.665322915 podStartE2EDuration="14.166828267s" podCreationTimestamp="2026-02-18 19:30:19 +0000 UTC" firstStartedPulling="2026-02-18 19:30:20.109198271 +0000 UTC m=+722.559611067" lastFinishedPulling="2026-02-18 19:30:32.610703623 +0000 UTC m=+735.061116419" observedRunningTime="2026-02-18 19:30:33.160309832 +0000 UTC m=+735.610722628" watchObservedRunningTime="2026-02-18 19:30:33.166828267 +0000 UTC m=+735.617241063" Feb 18 19:30:33 crc kubenswrapper[4754]: I0218 19:30:33.202253 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-d477x" podStartSLOduration=1.59707094 podStartE2EDuration="14.202216969s" podCreationTimestamp="2026-02-18 19:30:19 +0000 UTC" firstStartedPulling="2026-02-18 19:30:20.005194923 +0000 UTC m=+722.455607719" lastFinishedPulling="2026-02-18 19:30:32.610340952 +0000 UTC m=+735.060753748" observedRunningTime="2026-02-18 19:30:33.198173522 +0000 UTC m=+735.648586318" watchObservedRunningTime="2026-02-18 19:30:33.202216969 
+0000 UTC m=+735.652629765" Feb 18 19:30:33 crc kubenswrapper[4754]: I0218 19:30:33.218592 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5d6d4f4488-kcfm7" podStartSLOduration=1.459265538 podStartE2EDuration="14.218565492s" podCreationTimestamp="2026-02-18 19:30:19 +0000 UTC" firstStartedPulling="2026-02-18 19:30:19.850698898 +0000 UTC m=+722.301111704" lastFinishedPulling="2026-02-18 19:30:32.609998862 +0000 UTC m=+735.060411658" observedRunningTime="2026-02-18 19:30:33.217257952 +0000 UTC m=+735.667670758" watchObservedRunningTime="2026-02-18 19:30:33.218565492 +0000 UTC m=+735.668978288" Feb 18 19:30:34 crc kubenswrapper[4754]: I0218 19:30:34.080024 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-d477x" Feb 18 19:30:38 crc kubenswrapper[4754]: I0218 19:30:38.096842 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:30:38 crc kubenswrapper[4754]: I0218 19:30:38.097308 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:30:39 crc kubenswrapper[4754]: I0218 19:30:39.871296 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-ftfxk" Feb 18 19:30:59 crc kubenswrapper[4754]: I0218 19:30:59.106671 4754 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8"] Feb 18 19:30:59 crc kubenswrapper[4754]: I0218 19:30:59.108627 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8" Feb 18 19:30:59 crc kubenswrapper[4754]: I0218 19:30:59.110794 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 19:30:59 crc kubenswrapper[4754]: I0218 19:30:59.115888 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8"] Feb 18 19:30:59 crc kubenswrapper[4754]: I0218 19:30:59.204206 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5m7j\" (UniqueName: \"kubernetes.io/projected/c435f0d8-2bef-41d1-a6f7-1bf21f6c9832-kube-api-access-c5m7j\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8\" (UID: \"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8" Feb 18 19:30:59 crc kubenswrapper[4754]: I0218 19:30:59.204336 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c435f0d8-2bef-41d1-a6f7-1bf21f6c9832-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8\" (UID: \"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8" Feb 18 19:30:59 crc kubenswrapper[4754]: I0218 19:30:59.204419 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c435f0d8-2bef-41d1-a6f7-1bf21f6c9832-util\") pod 
\"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8\" (UID: \"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8" Feb 18 19:30:59 crc kubenswrapper[4754]: I0218 19:30:59.305294 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c435f0d8-2bef-41d1-a6f7-1bf21f6c9832-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8\" (UID: \"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8" Feb 18 19:30:59 crc kubenswrapper[4754]: I0218 19:30:59.305384 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c435f0d8-2bef-41d1-a6f7-1bf21f6c9832-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8\" (UID: \"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8" Feb 18 19:30:59 crc kubenswrapper[4754]: I0218 19:30:59.305470 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5m7j\" (UniqueName: \"kubernetes.io/projected/c435f0d8-2bef-41d1-a6f7-1bf21f6c9832-kube-api-access-c5m7j\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8\" (UID: \"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8" Feb 18 19:30:59 crc kubenswrapper[4754]: I0218 19:30:59.306029 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c435f0d8-2bef-41d1-a6f7-1bf21f6c9832-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8\" (UID: \"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832\") " 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8" Feb 18 19:30:59 crc kubenswrapper[4754]: I0218 19:30:59.306077 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c435f0d8-2bef-41d1-a6f7-1bf21f6c9832-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8\" (UID: \"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8" Feb 18 19:30:59 crc kubenswrapper[4754]: I0218 19:30:59.325041 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5m7j\" (UniqueName: \"kubernetes.io/projected/c435f0d8-2bef-41d1-a6f7-1bf21f6c9832-kube-api-access-c5m7j\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8\" (UID: \"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8" Feb 18 19:30:59 crc kubenswrapper[4754]: I0218 19:30:59.425012 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8" Feb 18 19:30:59 crc kubenswrapper[4754]: I0218 19:30:59.717558 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8"] Feb 18 19:31:00 crc kubenswrapper[4754]: I0218 19:31:00.711775 4754 generic.go:334] "Generic (PLEG): container finished" podID="c435f0d8-2bef-41d1-a6f7-1bf21f6c9832" containerID="9920b21c90e3679d3b60dd6c8d365a26490ba6e928981c5bc82be8188cdb7504" exitCode=0 Feb 18 19:31:00 crc kubenswrapper[4754]: I0218 19:31:00.712227 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8" event={"ID":"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832","Type":"ContainerDied","Data":"9920b21c90e3679d3b60dd6c8d365a26490ba6e928981c5bc82be8188cdb7504"} Feb 18 19:31:00 crc kubenswrapper[4754]: I0218 19:31:00.712318 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8" event={"ID":"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832","Type":"ContainerStarted","Data":"7814a9cedc58d707d0aaf47055ec9d833724d039bc9a3b62f692362c819f4a83"} Feb 18 19:31:01 crc kubenswrapper[4754]: I0218 19:31:01.446690 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lnx98"] Feb 18 19:31:01 crc kubenswrapper[4754]: I0218 19:31:01.448127 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lnx98" Feb 18 19:31:01 crc kubenswrapper[4754]: I0218 19:31:01.461584 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lnx98"] Feb 18 19:31:01 crc kubenswrapper[4754]: I0218 19:31:01.548956 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9616cda5-1936-4a98-83cb-0281199e6126-catalog-content\") pod \"redhat-operators-lnx98\" (UID: \"9616cda5-1936-4a98-83cb-0281199e6126\") " pod="openshift-marketplace/redhat-operators-lnx98" Feb 18 19:31:01 crc kubenswrapper[4754]: I0218 19:31:01.549020 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9616cda5-1936-4a98-83cb-0281199e6126-utilities\") pod \"redhat-operators-lnx98\" (UID: \"9616cda5-1936-4a98-83cb-0281199e6126\") " pod="openshift-marketplace/redhat-operators-lnx98" Feb 18 19:31:01 crc kubenswrapper[4754]: I0218 19:31:01.549050 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzcvg\" (UniqueName: \"kubernetes.io/projected/9616cda5-1936-4a98-83cb-0281199e6126-kube-api-access-dzcvg\") pod \"redhat-operators-lnx98\" (UID: \"9616cda5-1936-4a98-83cb-0281199e6126\") " pod="openshift-marketplace/redhat-operators-lnx98" Feb 18 19:31:01 crc kubenswrapper[4754]: I0218 19:31:01.650683 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9616cda5-1936-4a98-83cb-0281199e6126-catalog-content\") pod \"redhat-operators-lnx98\" (UID: \"9616cda5-1936-4a98-83cb-0281199e6126\") " pod="openshift-marketplace/redhat-operators-lnx98" Feb 18 19:31:01 crc kubenswrapper[4754]: I0218 19:31:01.650742 4754 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9616cda5-1936-4a98-83cb-0281199e6126-utilities\") pod \"redhat-operators-lnx98\" (UID: \"9616cda5-1936-4a98-83cb-0281199e6126\") " pod="openshift-marketplace/redhat-operators-lnx98" Feb 18 19:31:01 crc kubenswrapper[4754]: I0218 19:31:01.650767 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzcvg\" (UniqueName: \"kubernetes.io/projected/9616cda5-1936-4a98-83cb-0281199e6126-kube-api-access-dzcvg\") pod \"redhat-operators-lnx98\" (UID: \"9616cda5-1936-4a98-83cb-0281199e6126\") " pod="openshift-marketplace/redhat-operators-lnx98" Feb 18 19:31:01 crc kubenswrapper[4754]: I0218 19:31:01.651501 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9616cda5-1936-4a98-83cb-0281199e6126-catalog-content\") pod \"redhat-operators-lnx98\" (UID: \"9616cda5-1936-4a98-83cb-0281199e6126\") " pod="openshift-marketplace/redhat-operators-lnx98" Feb 18 19:31:01 crc kubenswrapper[4754]: I0218 19:31:01.651514 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9616cda5-1936-4a98-83cb-0281199e6126-utilities\") pod \"redhat-operators-lnx98\" (UID: \"9616cda5-1936-4a98-83cb-0281199e6126\") " pod="openshift-marketplace/redhat-operators-lnx98" Feb 18 19:31:01 crc kubenswrapper[4754]: I0218 19:31:01.673841 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzcvg\" (UniqueName: \"kubernetes.io/projected/9616cda5-1936-4a98-83cb-0281199e6126-kube-api-access-dzcvg\") pod \"redhat-operators-lnx98\" (UID: \"9616cda5-1936-4a98-83cb-0281199e6126\") " pod="openshift-marketplace/redhat-operators-lnx98" Feb 18 19:31:01 crc kubenswrapper[4754]: I0218 19:31:01.770082 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lnx98" Feb 18 19:31:02 crc kubenswrapper[4754]: I0218 19:31:02.083222 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lnx98"] Feb 18 19:31:02 crc kubenswrapper[4754]: I0218 19:31:02.725068 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnx98" event={"ID":"9616cda5-1936-4a98-83cb-0281199e6126","Type":"ContainerStarted","Data":"1aee808bdac558bdd6572a32f32d7912849ed10eb3dd00aa2f914aa4e7afde0a"} Feb 18 19:31:04 crc kubenswrapper[4754]: I0218 19:31:04.509337 4754 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 19:31:04 crc kubenswrapper[4754]: I0218 19:31:04.750844 4754 generic.go:334] "Generic (PLEG): container finished" podID="9616cda5-1936-4a98-83cb-0281199e6126" containerID="44ae8aab4eb7a1ca142b951c0d0b7509d9f33edf3591960a2843461b92da929b" exitCode=0 Feb 18 19:31:04 crc kubenswrapper[4754]: I0218 19:31:04.750965 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnx98" event={"ID":"9616cda5-1936-4a98-83cb-0281199e6126","Type":"ContainerDied","Data":"44ae8aab4eb7a1ca142b951c0d0b7509d9f33edf3591960a2843461b92da929b"} Feb 18 19:31:05 crc kubenswrapper[4754]: I0218 19:31:05.758307 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8" event={"ID":"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832","Type":"ContainerStarted","Data":"d54d0317902029342a683ac185cd138502f8c3ceaaee714f53eca7149be287d4"} Feb 18 19:31:06 crc kubenswrapper[4754]: I0218 19:31:06.768322 4754 generic.go:334] "Generic (PLEG): container finished" podID="9616cda5-1936-4a98-83cb-0281199e6126" containerID="cbd0cab224f65eca3cbb7725a32493fc83f42bc3fd58bf735b5d2382c907fcf9" exitCode=0 Feb 18 19:31:06 crc kubenswrapper[4754]: I0218 
19:31:06.768385 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnx98" event={"ID":"9616cda5-1936-4a98-83cb-0281199e6126","Type":"ContainerDied","Data":"cbd0cab224f65eca3cbb7725a32493fc83f42bc3fd58bf735b5d2382c907fcf9"} Feb 18 19:31:06 crc kubenswrapper[4754]: I0218 19:31:06.771337 4754 generic.go:334] "Generic (PLEG): container finished" podID="c435f0d8-2bef-41d1-a6f7-1bf21f6c9832" containerID="d54d0317902029342a683ac185cd138502f8c3ceaaee714f53eca7149be287d4" exitCode=0 Feb 18 19:31:06 crc kubenswrapper[4754]: I0218 19:31:06.771411 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8" event={"ID":"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832","Type":"ContainerDied","Data":"d54d0317902029342a683ac185cd138502f8c3ceaaee714f53eca7149be287d4"} Feb 18 19:31:07 crc kubenswrapper[4754]: I0218 19:31:07.778940 4754 generic.go:334] "Generic (PLEG): container finished" podID="c435f0d8-2bef-41d1-a6f7-1bf21f6c9832" containerID="e81a8c16c4f4f7f690c1402ccbc8e0f7433947e38072e684369ec9ede1be28ec" exitCode=0 Feb 18 19:31:07 crc kubenswrapper[4754]: I0218 19:31:07.779001 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8" event={"ID":"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832","Type":"ContainerDied","Data":"e81a8c16c4f4f7f690c1402ccbc8e0f7433947e38072e684369ec9ede1be28ec"} Feb 18 19:31:07 crc kubenswrapper[4754]: I0218 19:31:07.783994 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnx98" event={"ID":"9616cda5-1936-4a98-83cb-0281199e6126","Type":"ContainerStarted","Data":"2c7c8fbf791d97190fdb4a70cd26d364ca2e717699349014c1310a81b26bb595"} Feb 18 19:31:07 crc kubenswrapper[4754]: I0218 19:31:07.831223 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-lnx98" podStartSLOduration=4.587737415 podStartE2EDuration="6.831200096s" podCreationTimestamp="2026-02-18 19:31:01 +0000 UTC" firstStartedPulling="2026-02-18 19:31:04.940953783 +0000 UTC m=+767.391366579" lastFinishedPulling="2026-02-18 19:31:07.184416464 +0000 UTC m=+769.634829260" observedRunningTime="2026-02-18 19:31:07.824795894 +0000 UTC m=+770.275208710" watchObservedRunningTime="2026-02-18 19:31:07.831200096 +0000 UTC m=+770.281612902" Feb 18 19:31:08 crc kubenswrapper[4754]: I0218 19:31:08.096550 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:31:08 crc kubenswrapper[4754]: I0218 19:31:08.096642 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:31:08 crc kubenswrapper[4754]: I0218 19:31:08.096721 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:31:08 crc kubenswrapper[4754]: I0218 19:31:08.097530 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e80755a090c368aaa1f52b9e1d9b61931048fa366f94c61a6b1fb41f5ef0c6f5"} pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:31:08 crc kubenswrapper[4754]: I0218 19:31:08.097586 4754 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" containerID="cri-o://e80755a090c368aaa1f52b9e1d9b61931048fa366f94c61a6b1fb41f5ef0c6f5" gracePeriod=600 Feb 18 19:31:08 crc kubenswrapper[4754]: I0218 19:31:08.793263 4754 generic.go:334] "Generic (PLEG): container finished" podID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerID="e80755a090c368aaa1f52b9e1d9b61931048fa366f94c61a6b1fb41f5ef0c6f5" exitCode=0 Feb 18 19:31:08 crc kubenswrapper[4754]: I0218 19:31:08.793406 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerDied","Data":"e80755a090c368aaa1f52b9e1d9b61931048fa366f94c61a6b1fb41f5ef0c6f5"} Feb 18 19:31:08 crc kubenswrapper[4754]: I0218 19:31:08.793851 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"0b71273a5a5eb671bd4925b19c78d15799283dc68e22f711f6ec374c23ac8c87"} Feb 18 19:31:08 crc kubenswrapper[4754]: I0218 19:31:08.793883 4754 scope.go:117] "RemoveContainer" containerID="d68869f95ccdc0f0ce1d0a9d01eddff3e864ac626357691b0e582cc755bc82f8" Feb 18 19:31:09 crc kubenswrapper[4754]: I0218 19:31:09.029791 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8" Feb 18 19:31:09 crc kubenswrapper[4754]: I0218 19:31:09.168932 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c435f0d8-2bef-41d1-a6f7-1bf21f6c9832-bundle\") pod \"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832\" (UID: \"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832\") " Feb 18 19:31:09 crc kubenswrapper[4754]: I0218 19:31:09.169030 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c435f0d8-2bef-41d1-a6f7-1bf21f6c9832-util\") pod \"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832\" (UID: \"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832\") " Feb 18 19:31:09 crc kubenswrapper[4754]: I0218 19:31:09.169102 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5m7j\" (UniqueName: \"kubernetes.io/projected/c435f0d8-2bef-41d1-a6f7-1bf21f6c9832-kube-api-access-c5m7j\") pod \"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832\" (UID: \"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832\") " Feb 18 19:31:09 crc kubenswrapper[4754]: I0218 19:31:09.169771 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c435f0d8-2bef-41d1-a6f7-1bf21f6c9832-bundle" (OuterVolumeSpecName: "bundle") pod "c435f0d8-2bef-41d1-a6f7-1bf21f6c9832" (UID: "c435f0d8-2bef-41d1-a6f7-1bf21f6c9832"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:31:09 crc kubenswrapper[4754]: I0218 19:31:09.182719 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c435f0d8-2bef-41d1-a6f7-1bf21f6c9832-kube-api-access-c5m7j" (OuterVolumeSpecName: "kube-api-access-c5m7j") pod "c435f0d8-2bef-41d1-a6f7-1bf21f6c9832" (UID: "c435f0d8-2bef-41d1-a6f7-1bf21f6c9832"). InnerVolumeSpecName "kube-api-access-c5m7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:31:09 crc kubenswrapper[4754]: I0218 19:31:09.183832 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c435f0d8-2bef-41d1-a6f7-1bf21f6c9832-util" (OuterVolumeSpecName: "util") pod "c435f0d8-2bef-41d1-a6f7-1bf21f6c9832" (UID: "c435f0d8-2bef-41d1-a6f7-1bf21f6c9832"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:31:09 crc kubenswrapper[4754]: I0218 19:31:09.270844 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5m7j\" (UniqueName: \"kubernetes.io/projected/c435f0d8-2bef-41d1-a6f7-1bf21f6c9832-kube-api-access-c5m7j\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:09 crc kubenswrapper[4754]: I0218 19:31:09.270911 4754 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c435f0d8-2bef-41d1-a6f7-1bf21f6c9832-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:09 crc kubenswrapper[4754]: I0218 19:31:09.270931 4754 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c435f0d8-2bef-41d1-a6f7-1bf21f6c9832-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:09 crc kubenswrapper[4754]: I0218 19:31:09.809188 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8" event={"ID":"c435f0d8-2bef-41d1-a6f7-1bf21f6c9832","Type":"ContainerDied","Data":"7814a9cedc58d707d0aaf47055ec9d833724d039bc9a3b62f692362c819f4a83"} Feb 18 19:31:09 crc kubenswrapper[4754]: I0218 19:31:09.809247 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7814a9cedc58d707d0aaf47055ec9d833724d039bc9a3b62f692362c819f4a83" Feb 18 19:31:09 crc kubenswrapper[4754]: I0218 19:31:09.809349 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecavtpt8" Feb 18 19:31:11 crc kubenswrapper[4754]: I0218 19:31:11.770505 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lnx98" Feb 18 19:31:11 crc kubenswrapper[4754]: I0218 19:31:11.770896 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lnx98" Feb 18 19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.062397 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-9f8rs"] Feb 18 19:31:12 crc kubenswrapper[4754]: E0218 19:31:12.063077 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c435f0d8-2bef-41d1-a6f7-1bf21f6c9832" containerName="extract" Feb 18 19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.063093 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="c435f0d8-2bef-41d1-a6f7-1bf21f6c9832" containerName="extract" Feb 18 19:31:12 crc kubenswrapper[4754]: E0218 19:31:12.063107 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c435f0d8-2bef-41d1-a6f7-1bf21f6c9832" containerName="util" Feb 18 19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.063114 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="c435f0d8-2bef-41d1-a6f7-1bf21f6c9832" containerName="util" Feb 18 19:31:12 crc kubenswrapper[4754]: E0218 19:31:12.063130 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c435f0d8-2bef-41d1-a6f7-1bf21f6c9832" containerName="pull" Feb 18 19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.063139 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="c435f0d8-2bef-41d1-a6f7-1bf21f6c9832" containerName="pull" Feb 18 19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.063285 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="c435f0d8-2bef-41d1-a6f7-1bf21f6c9832" containerName="extract" Feb 18 
19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.063783 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-9f8rs" Feb 18 19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.066992 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 18 19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.067445 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 18 19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.067576 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-6w2hv" Feb 18 19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.083582 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-9f8rs"] Feb 18 19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.210135 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5xms\" (UniqueName: \"kubernetes.io/projected/b6f248e5-7ec5-443f-87c5-a8de52d8c4e8-kube-api-access-k5xms\") pod \"nmstate-operator-694c9596b7-9f8rs\" (UID: \"b6f248e5-7ec5-443f-87c5-a8de52d8c4e8\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-9f8rs" Feb 18 19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.312425 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5xms\" (UniqueName: \"kubernetes.io/projected/b6f248e5-7ec5-443f-87c5-a8de52d8c4e8-kube-api-access-k5xms\") pod \"nmstate-operator-694c9596b7-9f8rs\" (UID: \"b6f248e5-7ec5-443f-87c5-a8de52d8c4e8\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-9f8rs" Feb 18 19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.334474 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5xms\" (UniqueName: 
\"kubernetes.io/projected/b6f248e5-7ec5-443f-87c5-a8de52d8c4e8-kube-api-access-k5xms\") pod \"nmstate-operator-694c9596b7-9f8rs\" (UID: \"b6f248e5-7ec5-443f-87c5-a8de52d8c4e8\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-9f8rs" Feb 18 19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.382474 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-9f8rs" Feb 18 19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.664244 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-9f8rs"] Feb 18 19:31:12 crc kubenswrapper[4754]: W0218 19:31:12.671071 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6f248e5_7ec5_443f_87c5_a8de52d8c4e8.slice/crio-4d2604ea68b730d9c444701f53fa88eadd9a82d5ab05b022d54bbfb3812991da WatchSource:0}: Error finding container 4d2604ea68b730d9c444701f53fa88eadd9a82d5ab05b022d54bbfb3812991da: Status 404 returned error can't find the container with id 4d2604ea68b730d9c444701f53fa88eadd9a82d5ab05b022d54bbfb3812991da Feb 18 19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.808950 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lnx98" podUID="9616cda5-1936-4a98-83cb-0281199e6126" containerName="registry-server" probeResult="failure" output=< Feb 18 19:31:12 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Feb 18 19:31:12 crc kubenswrapper[4754]: > Feb 18 19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.828605 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-9f8rs" event={"ID":"b6f248e5-7ec5-443f-87c5-a8de52d8c4e8","Type":"ContainerStarted","Data":"4d2604ea68b730d9c444701f53fa88eadd9a82d5ab05b022d54bbfb3812991da"} Feb 18 19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.844209 4754 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/community-operators-fd8cq"] Feb 18 19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.845818 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fd8cq" Feb 18 19:31:12 crc kubenswrapper[4754]: I0218 19:31:12.857426 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fd8cq"] Feb 18 19:31:13 crc kubenswrapper[4754]: I0218 19:31:13.024638 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgm8n\" (UniqueName: \"kubernetes.io/projected/18d62a42-bba1-4fc4-ae4e-d7495c0a1da8-kube-api-access-rgm8n\") pod \"community-operators-fd8cq\" (UID: \"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8\") " pod="openshift-marketplace/community-operators-fd8cq" Feb 18 19:31:13 crc kubenswrapper[4754]: I0218 19:31:13.024718 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18d62a42-bba1-4fc4-ae4e-d7495c0a1da8-utilities\") pod \"community-operators-fd8cq\" (UID: \"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8\") " pod="openshift-marketplace/community-operators-fd8cq" Feb 18 19:31:13 crc kubenswrapper[4754]: I0218 19:31:13.024771 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18d62a42-bba1-4fc4-ae4e-d7495c0a1da8-catalog-content\") pod \"community-operators-fd8cq\" (UID: \"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8\") " pod="openshift-marketplace/community-operators-fd8cq" Feb 18 19:31:13 crc kubenswrapper[4754]: I0218 19:31:13.127230 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18d62a42-bba1-4fc4-ae4e-d7495c0a1da8-utilities\") pod \"community-operators-fd8cq\" (UID: 
\"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8\") " pod="openshift-marketplace/community-operators-fd8cq" Feb 18 19:31:13 crc kubenswrapper[4754]: I0218 19:31:13.126067 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18d62a42-bba1-4fc4-ae4e-d7495c0a1da8-utilities\") pod \"community-operators-fd8cq\" (UID: \"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8\") " pod="openshift-marketplace/community-operators-fd8cq" Feb 18 19:31:13 crc kubenswrapper[4754]: I0218 19:31:13.127388 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18d62a42-bba1-4fc4-ae4e-d7495c0a1da8-catalog-content\") pod \"community-operators-fd8cq\" (UID: \"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8\") " pod="openshift-marketplace/community-operators-fd8cq" Feb 18 19:31:13 crc kubenswrapper[4754]: I0218 19:31:13.127507 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgm8n\" (UniqueName: \"kubernetes.io/projected/18d62a42-bba1-4fc4-ae4e-d7495c0a1da8-kube-api-access-rgm8n\") pod \"community-operators-fd8cq\" (UID: \"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8\") " pod="openshift-marketplace/community-operators-fd8cq" Feb 18 19:31:13 crc kubenswrapper[4754]: I0218 19:31:13.127883 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18d62a42-bba1-4fc4-ae4e-d7495c0a1da8-catalog-content\") pod \"community-operators-fd8cq\" (UID: \"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8\") " pod="openshift-marketplace/community-operators-fd8cq" Feb 18 19:31:13 crc kubenswrapper[4754]: I0218 19:31:13.162475 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgm8n\" (UniqueName: \"kubernetes.io/projected/18d62a42-bba1-4fc4-ae4e-d7495c0a1da8-kube-api-access-rgm8n\") pod \"community-operators-fd8cq\" (UID: 
\"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8\") " pod="openshift-marketplace/community-operators-fd8cq" Feb 18 19:31:13 crc kubenswrapper[4754]: I0218 19:31:13.165062 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fd8cq" Feb 18 19:31:13 crc kubenswrapper[4754]: I0218 19:31:13.474149 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fd8cq"] Feb 18 19:31:13 crc kubenswrapper[4754]: W0218 19:31:13.481386 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18d62a42_bba1_4fc4_ae4e_d7495c0a1da8.slice/crio-87fe89f4a7e6d85e4a71adec90ce2d4f8894c513f68f8f3eb893329f0fe4c242 WatchSource:0}: Error finding container 87fe89f4a7e6d85e4a71adec90ce2d4f8894c513f68f8f3eb893329f0fe4c242: Status 404 returned error can't find the container with id 87fe89f4a7e6d85e4a71adec90ce2d4f8894c513f68f8f3eb893329f0fe4c242 Feb 18 19:31:13 crc kubenswrapper[4754]: I0218 19:31:13.836244 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd8cq" event={"ID":"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8","Type":"ContainerStarted","Data":"87fe89f4a7e6d85e4a71adec90ce2d4f8894c513f68f8f3eb893329f0fe4c242"} Feb 18 19:31:14 crc kubenswrapper[4754]: I0218 19:31:14.845657 4754 generic.go:334] "Generic (PLEG): container finished" podID="18d62a42-bba1-4fc4-ae4e-d7495c0a1da8" containerID="5c1033901206c0abc6c9003d629e220a9ae186696fe56032160a748b38519b4d" exitCode=0 Feb 18 19:31:14 crc kubenswrapper[4754]: I0218 19:31:14.845717 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd8cq" event={"ID":"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8","Type":"ContainerDied","Data":"5c1033901206c0abc6c9003d629e220a9ae186696fe56032160a748b38519b4d"} Feb 18 19:31:15 crc kubenswrapper[4754]: I0218 19:31:15.856096 4754 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-9f8rs" event={"ID":"b6f248e5-7ec5-443f-87c5-a8de52d8c4e8","Type":"ContainerStarted","Data":"655eb03fa16d917f8e461bb3b1826cc7e05e1ad0c6cbdfb10671319e2c157e4f"} Feb 18 19:31:15 crc kubenswrapper[4754]: I0218 19:31:15.880980 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-9f8rs" podStartSLOduration=1.218838861 podStartE2EDuration="3.88095228s" podCreationTimestamp="2026-02-18 19:31:12 +0000 UTC" firstStartedPulling="2026-02-18 19:31:12.675551417 +0000 UTC m=+775.125964213" lastFinishedPulling="2026-02-18 19:31:15.337664846 +0000 UTC m=+777.788077632" observedRunningTime="2026-02-18 19:31:15.878675888 +0000 UTC m=+778.329088704" watchObservedRunningTime="2026-02-18 19:31:15.88095228 +0000 UTC m=+778.331365076" Feb 18 19:31:16 crc kubenswrapper[4754]: I0218 19:31:16.866746 4754 generic.go:334] "Generic (PLEG): container finished" podID="18d62a42-bba1-4fc4-ae4e-d7495c0a1da8" containerID="17a5d02c790b4060495552b0715811b000b54c605faaec73f2b5bc8cb93fe2c4" exitCode=0 Feb 18 19:31:16 crc kubenswrapper[4754]: I0218 19:31:16.866893 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd8cq" event={"ID":"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8","Type":"ContainerDied","Data":"17a5d02c790b4060495552b0715811b000b54c605faaec73f2b5bc8cb93fe2c4"} Feb 18 19:31:17 crc kubenswrapper[4754]: I0218 19:31:17.876602 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd8cq" event={"ID":"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8","Type":"ContainerStarted","Data":"9624fce749cf0657bfe826e263bb89c1562197bc1da17350e64a2b609cbc0e4b"} Feb 18 19:31:17 crc kubenswrapper[4754]: I0218 19:31:17.896348 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fd8cq" podStartSLOduration=3.9236096050000002 
podStartE2EDuration="5.896319526s" podCreationTimestamp="2026-02-18 19:31:12 +0000 UTC" firstStartedPulling="2026-02-18 19:31:15.287136762 +0000 UTC m=+777.737549568" lastFinishedPulling="2026-02-18 19:31:17.259846693 +0000 UTC m=+779.710259489" observedRunningTime="2026-02-18 19:31:17.893028983 +0000 UTC m=+780.343441869" watchObservedRunningTime="2026-02-18 19:31:17.896319526 +0000 UTC m=+780.346732322" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.257102 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-zx5fj"] Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.262249 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-zx5fj" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.268316 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-ns52z" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.276656 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-zx5fj"] Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.309379 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-7rv92"] Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.311424 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7rv92" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.313707 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.322616 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-7rv92"] Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.324872 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-v8tj2"] Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.325880 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-v8tj2" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.341475 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg6pw\" (UniqueName: \"kubernetes.io/projected/48da33e9-5d62-48a7-b7ff-77aaab4b2d4d-kube-api-access-rg6pw\") pod \"nmstate-metrics-58c85c668d-zx5fj\" (UID: \"48da33e9-5d62-48a7-b7ff-77aaab4b2d4d\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-zx5fj" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.397846 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfqsv"] Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.398925 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfqsv" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.401328 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.405118 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.406425 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-qjv9c" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.408222 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfqsv"] Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.443063 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1065e0bf-8637-4b83-bc10-58b0cd328bd9-dbus-socket\") pod \"nmstate-handler-v8tj2\" (UID: \"1065e0bf-8637-4b83-bc10-58b0cd328bd9\") " pod="openshift-nmstate/nmstate-handler-v8tj2" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.443121 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1065e0bf-8637-4b83-bc10-58b0cd328bd9-ovs-socket\") pod \"nmstate-handler-v8tj2\" (UID: \"1065e0bf-8637-4b83-bc10-58b0cd328bd9\") " pod="openshift-nmstate/nmstate-handler-v8tj2" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.443179 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg6pw\" (UniqueName: \"kubernetes.io/projected/48da33e9-5d62-48a7-b7ff-77aaab4b2d4d-kube-api-access-rg6pw\") pod \"nmstate-metrics-58c85c668d-zx5fj\" (UID: \"48da33e9-5d62-48a7-b7ff-77aaab4b2d4d\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-zx5fj" Feb 
18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.443438 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldbnn\" (UniqueName: \"kubernetes.io/projected/1065e0bf-8637-4b83-bc10-58b0cd328bd9-kube-api-access-ldbnn\") pod \"nmstate-handler-v8tj2\" (UID: \"1065e0bf-8637-4b83-bc10-58b0cd328bd9\") " pod="openshift-nmstate/nmstate-handler-v8tj2" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.443513 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l75gt\" (UniqueName: \"kubernetes.io/projected/6083b87d-c507-4902-a132-528a3bf024d9-kube-api-access-l75gt\") pod \"nmstate-webhook-866bcb46dc-7rv92\" (UID: \"6083b87d-c507-4902-a132-528a3bf024d9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7rv92" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.443578 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1065e0bf-8637-4b83-bc10-58b0cd328bd9-nmstate-lock\") pod \"nmstate-handler-v8tj2\" (UID: \"1065e0bf-8637-4b83-bc10-58b0cd328bd9\") " pod="openshift-nmstate/nmstate-handler-v8tj2" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.443829 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6083b87d-c507-4902-a132-528a3bf024d9-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-7rv92\" (UID: \"6083b87d-c507-4902-a132-528a3bf024d9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7rv92" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.464196 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg6pw\" (UniqueName: \"kubernetes.io/projected/48da33e9-5d62-48a7-b7ff-77aaab4b2d4d-kube-api-access-rg6pw\") pod \"nmstate-metrics-58c85c668d-zx5fj\" (UID: 
\"48da33e9-5d62-48a7-b7ff-77aaab4b2d4d\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-zx5fj" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.545787 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l75gt\" (UniqueName: \"kubernetes.io/projected/6083b87d-c507-4902-a132-528a3bf024d9-kube-api-access-l75gt\") pod \"nmstate-webhook-866bcb46dc-7rv92\" (UID: \"6083b87d-c507-4902-a132-528a3bf024d9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7rv92" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.545843 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldbnn\" (UniqueName: \"kubernetes.io/projected/1065e0bf-8637-4b83-bc10-58b0cd328bd9-kube-api-access-ldbnn\") pod \"nmstate-handler-v8tj2\" (UID: \"1065e0bf-8637-4b83-bc10-58b0cd328bd9\") " pod="openshift-nmstate/nmstate-handler-v8tj2" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.545869 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1065e0bf-8637-4b83-bc10-58b0cd328bd9-nmstate-lock\") pod \"nmstate-handler-v8tj2\" (UID: \"1065e0bf-8637-4b83-bc10-58b0cd328bd9\") " pod="openshift-nmstate/nmstate-handler-v8tj2" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.545922 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/11fb0417-720d-47d8-bbc1-24e0347f5564-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-sfqsv\" (UID: \"11fb0417-720d-47d8-bbc1-24e0347f5564\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfqsv" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.545949 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6083b87d-c507-4902-a132-528a3bf024d9-tls-key-pair\") pod 
\"nmstate-webhook-866bcb46dc-7rv92\" (UID: \"6083b87d-c507-4902-a132-528a3bf024d9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7rv92" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.546030 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/11fb0417-720d-47d8-bbc1-24e0347f5564-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-sfqsv\" (UID: \"11fb0417-720d-47d8-bbc1-24e0347f5564\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfqsv" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.546049 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rnq9\" (UniqueName: \"kubernetes.io/projected/11fb0417-720d-47d8-bbc1-24e0347f5564-kube-api-access-4rnq9\") pod \"nmstate-console-plugin-5c78fc5d65-sfqsv\" (UID: \"11fb0417-720d-47d8-bbc1-24e0347f5564\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfqsv" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.546056 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1065e0bf-8637-4b83-bc10-58b0cd328bd9-nmstate-lock\") pod \"nmstate-handler-v8tj2\" (UID: \"1065e0bf-8637-4b83-bc10-58b0cd328bd9\") " pod="openshift-nmstate/nmstate-handler-v8tj2" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.546116 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1065e0bf-8637-4b83-bc10-58b0cd328bd9-dbus-socket\") pod \"nmstate-handler-v8tj2\" (UID: \"1065e0bf-8637-4b83-bc10-58b0cd328bd9\") " pod="openshift-nmstate/nmstate-handler-v8tj2" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.546282 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/1065e0bf-8637-4b83-bc10-58b0cd328bd9-ovs-socket\") pod \"nmstate-handler-v8tj2\" (UID: \"1065e0bf-8637-4b83-bc10-58b0cd328bd9\") " pod="openshift-nmstate/nmstate-handler-v8tj2" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.546345 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1065e0bf-8637-4b83-bc10-58b0cd328bd9-ovs-socket\") pod \"nmstate-handler-v8tj2\" (UID: \"1065e0bf-8637-4b83-bc10-58b0cd328bd9\") " pod="openshift-nmstate/nmstate-handler-v8tj2" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.546533 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1065e0bf-8637-4b83-bc10-58b0cd328bd9-dbus-socket\") pod \"nmstate-handler-v8tj2\" (UID: \"1065e0bf-8637-4b83-bc10-58b0cd328bd9\") " pod="openshift-nmstate/nmstate-handler-v8tj2" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.555253 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6083b87d-c507-4902-a132-528a3bf024d9-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-7rv92\" (UID: \"6083b87d-c507-4902-a132-528a3bf024d9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7rv92" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.576983 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldbnn\" (UniqueName: \"kubernetes.io/projected/1065e0bf-8637-4b83-bc10-58b0cd328bd9-kube-api-access-ldbnn\") pod \"nmstate-handler-v8tj2\" (UID: \"1065e0bf-8637-4b83-bc10-58b0cd328bd9\") " pod="openshift-nmstate/nmstate-handler-v8tj2" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.581436 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l75gt\" (UniqueName: \"kubernetes.io/projected/6083b87d-c507-4902-a132-528a3bf024d9-kube-api-access-l75gt\") pod 
\"nmstate-webhook-866bcb46dc-7rv92\" (UID: \"6083b87d-c507-4902-a132-528a3bf024d9\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7rv92" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.594457 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-zx5fj" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.627670 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7rv92" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.631161 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-559f47f6f8-kplhw"] Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.632319 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.645537 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-v8tj2" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.647724 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/11fb0417-720d-47d8-bbc1-24e0347f5564-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-sfqsv\" (UID: \"11fb0417-720d-47d8-bbc1-24e0347f5564\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfqsv" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.647763 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rnq9\" (UniqueName: \"kubernetes.io/projected/11fb0417-720d-47d8-bbc1-24e0347f5564-kube-api-access-4rnq9\") pod \"nmstate-console-plugin-5c78fc5d65-sfqsv\" (UID: \"11fb0417-720d-47d8-bbc1-24e0347f5564\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfqsv" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.647782 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-559f47f6f8-kplhw"] Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.647956 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/11fb0417-720d-47d8-bbc1-24e0347f5564-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-sfqsv\" (UID: \"11fb0417-720d-47d8-bbc1-24e0347f5564\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfqsv" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.649011 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/11fb0417-720d-47d8-bbc1-24e0347f5564-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-sfqsv\" (UID: \"11fb0417-720d-47d8-bbc1-24e0347f5564\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfqsv" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.655010 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/11fb0417-720d-47d8-bbc1-24e0347f5564-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-sfqsv\" (UID: \"11fb0417-720d-47d8-bbc1-24e0347f5564\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfqsv" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.668348 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rnq9\" (UniqueName: \"kubernetes.io/projected/11fb0417-720d-47d8-bbc1-24e0347f5564-kube-api-access-4rnq9\") pod \"nmstate-console-plugin-5c78fc5d65-sfqsv\" (UID: \"11fb0417-720d-47d8-bbc1-24e0347f5564\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfqsv" Feb 18 19:31:20 crc kubenswrapper[4754]: W0218 19:31:20.687755 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1065e0bf_8637_4b83_bc10_58b0cd328bd9.slice/crio-adde7d51deae2c480e4ac19aedff5537e78adb310c548a472a73187cd054859c WatchSource:0}: Error finding container adde7d51deae2c480e4ac19aedff5537e78adb310c548a472a73187cd054859c: Status 404 returned error can't find the container with id adde7d51deae2c480e4ac19aedff5537e78adb310c548a472a73187cd054859c Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.723380 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfqsv" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.749718 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58eab3d8-cdec-48ae-8a18-6748e535a36a-service-ca\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.749803 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58eab3d8-cdec-48ae-8a18-6748e535a36a-trusted-ca-bundle\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.749844 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/58eab3d8-cdec-48ae-8a18-6748e535a36a-console-config\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.749879 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/58eab3d8-cdec-48ae-8a18-6748e535a36a-console-serving-cert\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.749897 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnt95\" (UniqueName: 
\"kubernetes.io/projected/58eab3d8-cdec-48ae-8a18-6748e535a36a-kube-api-access-pnt95\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.749916 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/58eab3d8-cdec-48ae-8a18-6748e535a36a-console-oauth-config\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.749966 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/58eab3d8-cdec-48ae-8a18-6748e535a36a-oauth-serving-cert\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.851801 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/58eab3d8-cdec-48ae-8a18-6748e535a36a-console-config\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.851854 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/58eab3d8-cdec-48ae-8a18-6748e535a36a-console-serving-cert\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.851872 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pnt95\" (UniqueName: \"kubernetes.io/projected/58eab3d8-cdec-48ae-8a18-6748e535a36a-kube-api-access-pnt95\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.851889 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/58eab3d8-cdec-48ae-8a18-6748e535a36a-console-oauth-config\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.851936 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/58eab3d8-cdec-48ae-8a18-6748e535a36a-oauth-serving-cert\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.851957 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58eab3d8-cdec-48ae-8a18-6748e535a36a-service-ca\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.851995 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58eab3d8-cdec-48ae-8a18-6748e535a36a-trusted-ca-bundle\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.853453 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/58eab3d8-cdec-48ae-8a18-6748e535a36a-console-config\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.853484 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58eab3d8-cdec-48ae-8a18-6748e535a36a-trusted-ca-bundle\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.854660 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/58eab3d8-cdec-48ae-8a18-6748e535a36a-oauth-serving-cert\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.855736 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58eab3d8-cdec-48ae-8a18-6748e535a36a-service-ca\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.857949 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/58eab3d8-cdec-48ae-8a18-6748e535a36a-console-oauth-config\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.860703 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/58eab3d8-cdec-48ae-8a18-6748e535a36a-console-serving-cert\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.872651 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnt95\" (UniqueName: \"kubernetes.io/projected/58eab3d8-cdec-48ae-8a18-6748e535a36a-kube-api-access-pnt95\") pod \"console-559f47f6f8-kplhw\" (UID: \"58eab3d8-cdec-48ae-8a18-6748e535a36a\") " pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.896408 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-v8tj2" event={"ID":"1065e0bf-8637-4b83-bc10-58b0cd328bd9","Type":"ContainerStarted","Data":"adde7d51deae2c480e4ac19aedff5537e78adb310c548a472a73187cd054859c"} Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.954230 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-7rv92"] Feb 18 19:31:20 crc kubenswrapper[4754]: I0218 19:31:20.959419 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:21 crc kubenswrapper[4754]: I0218 19:31:21.017735 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfqsv"] Feb 18 19:31:21 crc kubenswrapper[4754]: I0218 19:31:21.099516 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-zx5fj"] Feb 18 19:31:21 crc kubenswrapper[4754]: W0218 19:31:21.100600 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48da33e9_5d62_48a7_b7ff_77aaab4b2d4d.slice/crio-215ce1caad76a8fbec24784b28cf82e1ead832d6361d88013c0c50952f6279fa WatchSource:0}: Error finding container 215ce1caad76a8fbec24784b28cf82e1ead832d6361d88013c0c50952f6279fa: Status 404 returned error can't find the container with id 215ce1caad76a8fbec24784b28cf82e1ead832d6361d88013c0c50952f6279fa Feb 18 19:31:21 crc kubenswrapper[4754]: I0218 19:31:21.187777 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-559f47f6f8-kplhw"] Feb 18 19:31:21 crc kubenswrapper[4754]: W0218 19:31:21.199316 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58eab3d8_cdec_48ae_8a18_6748e535a36a.slice/crio-39f4f81fea00a9f5c6d3070145dffea38b7706fc9a5c4a3ec9d0142deda86f61 WatchSource:0}: Error finding container 39f4f81fea00a9f5c6d3070145dffea38b7706fc9a5c4a3ec9d0142deda86f61: Status 404 returned error can't find the container with id 39f4f81fea00a9f5c6d3070145dffea38b7706fc9a5c4a3ec9d0142deda86f61 Feb 18 19:31:21 crc kubenswrapper[4754]: I0218 19:31:21.838659 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lnx98" Feb 18 19:31:21 crc kubenswrapper[4754]: I0218 19:31:21.901273 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-lnx98" Feb 18 19:31:21 crc kubenswrapper[4754]: I0218 19:31:21.908229 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7rv92" event={"ID":"6083b87d-c507-4902-a132-528a3bf024d9","Type":"ContainerStarted","Data":"5bff6e16390de02d696f05c3efa9b54740f5d36b19da3f8ef3a40c6029c0e621"} Feb 18 19:31:21 crc kubenswrapper[4754]: I0218 19:31:21.909863 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-zx5fj" event={"ID":"48da33e9-5d62-48a7-b7ff-77aaab4b2d4d","Type":"ContainerStarted","Data":"215ce1caad76a8fbec24784b28cf82e1ead832d6361d88013c0c50952f6279fa"} Feb 18 19:31:21 crc kubenswrapper[4754]: I0218 19:31:21.912442 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-559f47f6f8-kplhw" event={"ID":"58eab3d8-cdec-48ae-8a18-6748e535a36a","Type":"ContainerStarted","Data":"c8d6012b5638f2011c7a769d145b2001989d8565a90d280ebdeeed5b402c5c00"} Feb 18 19:31:21 crc kubenswrapper[4754]: I0218 19:31:21.912473 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-559f47f6f8-kplhw" event={"ID":"58eab3d8-cdec-48ae-8a18-6748e535a36a","Type":"ContainerStarted","Data":"39f4f81fea00a9f5c6d3070145dffea38b7706fc9a5c4a3ec9d0142deda86f61"} Feb 18 19:31:21 crc kubenswrapper[4754]: I0218 19:31:21.914193 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfqsv" event={"ID":"11fb0417-720d-47d8-bbc1-24e0347f5564","Type":"ContainerStarted","Data":"442565dd760e08e480462420dec5f6a72c58ce958aaf7b07749803e53c537ad4"} Feb 18 19:31:21 crc kubenswrapper[4754]: I0218 19:31:21.943761 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-559f47f6f8-kplhw" podStartSLOduration=1.943728237 podStartE2EDuration="1.943728237s" podCreationTimestamp="2026-02-18 19:31:20 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:31:21.940792154 +0000 UTC m=+784.391204990" watchObservedRunningTime="2026-02-18 19:31:21.943728237 +0000 UTC m=+784.394141033" Feb 18 19:31:22 crc kubenswrapper[4754]: I0218 19:31:22.640626 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lnx98"] Feb 18 19:31:22 crc kubenswrapper[4754]: I0218 19:31:22.925387 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lnx98" podUID="9616cda5-1936-4a98-83cb-0281199e6126" containerName="registry-server" containerID="cri-o://2c7c8fbf791d97190fdb4a70cd26d364ca2e717699349014c1310a81b26bb595" gracePeriod=2 Feb 18 19:31:23 crc kubenswrapper[4754]: I0218 19:31:23.165961 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fd8cq" Feb 18 19:31:23 crc kubenswrapper[4754]: I0218 19:31:23.166039 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fd8cq" Feb 18 19:31:23 crc kubenswrapper[4754]: I0218 19:31:23.229955 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fd8cq" Feb 18 19:31:23 crc kubenswrapper[4754]: I0218 19:31:23.937411 4754 generic.go:334] "Generic (PLEG): container finished" podID="9616cda5-1936-4a98-83cb-0281199e6126" containerID="2c7c8fbf791d97190fdb4a70cd26d364ca2e717699349014c1310a81b26bb595" exitCode=0 Feb 18 19:31:23 crc kubenswrapper[4754]: I0218 19:31:23.937511 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnx98" event={"ID":"9616cda5-1936-4a98-83cb-0281199e6126","Type":"ContainerDied","Data":"2c7c8fbf791d97190fdb4a70cd26d364ca2e717699349014c1310a81b26bb595"} Feb 18 19:31:23 crc kubenswrapper[4754]: I0218 
19:31:23.995394 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fd8cq" Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.219577 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnx98" Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.316481 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzcvg\" (UniqueName: \"kubernetes.io/projected/9616cda5-1936-4a98-83cb-0281199e6126-kube-api-access-dzcvg\") pod \"9616cda5-1936-4a98-83cb-0281199e6126\" (UID: \"9616cda5-1936-4a98-83cb-0281199e6126\") " Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.316532 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9616cda5-1936-4a98-83cb-0281199e6126-catalog-content\") pod \"9616cda5-1936-4a98-83cb-0281199e6126\" (UID: \"9616cda5-1936-4a98-83cb-0281199e6126\") " Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.316605 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9616cda5-1936-4a98-83cb-0281199e6126-utilities\") pod \"9616cda5-1936-4a98-83cb-0281199e6126\" (UID: \"9616cda5-1936-4a98-83cb-0281199e6126\") " Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.318386 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9616cda5-1936-4a98-83cb-0281199e6126-utilities" (OuterVolumeSpecName: "utilities") pod "9616cda5-1936-4a98-83cb-0281199e6126" (UID: "9616cda5-1936-4a98-83cb-0281199e6126"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.321865 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9616cda5-1936-4a98-83cb-0281199e6126-kube-api-access-dzcvg" (OuterVolumeSpecName: "kube-api-access-dzcvg") pod "9616cda5-1936-4a98-83cb-0281199e6126" (UID: "9616cda5-1936-4a98-83cb-0281199e6126"). InnerVolumeSpecName "kube-api-access-dzcvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.324351 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzcvg\" (UniqueName: \"kubernetes.io/projected/9616cda5-1936-4a98-83cb-0281199e6126-kube-api-access-dzcvg\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.324383 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9616cda5-1936-4a98-83cb-0281199e6126-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.439762 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9616cda5-1936-4a98-83cb-0281199e6126-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9616cda5-1936-4a98-83cb-0281199e6126" (UID: "9616cda5-1936-4a98-83cb-0281199e6126"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.526862 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9616cda5-1936-4a98-83cb-0281199e6126-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.946453 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-v8tj2" event={"ID":"1065e0bf-8637-4b83-bc10-58b0cd328bd9","Type":"ContainerStarted","Data":"c404353a2495b27ff40cbda8c7a84a0ee1287eb0c42fc5d04eb5ac8069dfc604"} Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.946848 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-v8tj2" Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.949127 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7rv92" event={"ID":"6083b87d-c507-4902-a132-528a3bf024d9","Type":"ContainerStarted","Data":"b9c419cfdcc8a0955005673bfcee85cf92344ad0a0a4d3c212b548860f9c0e9a"} Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.949790 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7rv92" Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.951877 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-zx5fj" event={"ID":"48da33e9-5d62-48a7-b7ff-77aaab4b2d4d","Type":"ContainerStarted","Data":"5e4eed77cb0e01149a8739d9aecc08e911fa03ab4e73fcf8a2d8a0975b730fc2"} Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.955801 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnx98" event={"ID":"9616cda5-1936-4a98-83cb-0281199e6126","Type":"ContainerDied","Data":"1aee808bdac558bdd6572a32f32d7912849ed10eb3dd00aa2f914aa4e7afde0a"} Feb 18 19:31:24 
crc kubenswrapper[4754]: I0218 19:31:24.955871 4754 scope.go:117] "RemoveContainer" containerID="2c7c8fbf791d97190fdb4a70cd26d364ca2e717699349014c1310a81b26bb595" Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.955816 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnx98" Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.958276 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfqsv" event={"ID":"11fb0417-720d-47d8-bbc1-24e0347f5564","Type":"ContainerStarted","Data":"0c797127f8d973874647b8aa2fc0edf114ae0845704ac9ed2e701fdaaf843a69"} Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.974670 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-v8tj2" podStartSLOduration=1.6240009770000001 podStartE2EDuration="4.974642999s" podCreationTimestamp="2026-02-18 19:31:20 +0000 UTC" firstStartedPulling="2026-02-18 19:31:20.689918882 +0000 UTC m=+783.140331678" lastFinishedPulling="2026-02-18 19:31:24.040560904 +0000 UTC m=+786.490973700" observedRunningTime="2026-02-18 19:31:24.966030487 +0000 UTC m=+787.416443283" watchObservedRunningTime="2026-02-18 19:31:24.974642999 +0000 UTC m=+787.425055805" Feb 18 19:31:24 crc kubenswrapper[4754]: I0218 19:31:24.983960 4754 scope.go:117] "RemoveContainer" containerID="cbd0cab224f65eca3cbb7725a32493fc83f42bc3fd58bf735b5d2382c907fcf9" Feb 18 19:31:25 crc kubenswrapper[4754]: I0218 19:31:25.001227 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7rv92" podStartSLOduration=1.945181103 podStartE2EDuration="5.001195058s" podCreationTimestamp="2026-02-18 19:31:20 +0000 UTC" firstStartedPulling="2026-02-18 19:31:20.980055487 +0000 UTC m=+783.430468273" lastFinishedPulling="2026-02-18 19:31:24.036069432 +0000 UTC m=+786.486482228" 
observedRunningTime="2026-02-18 19:31:24.999828254 +0000 UTC m=+787.450241060" watchObservedRunningTime="2026-02-18 19:31:25.001195058 +0000 UTC m=+787.451607874" Feb 18 19:31:25 crc kubenswrapper[4754]: I0218 19:31:25.023860 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-sfqsv" podStartSLOduration=2.028019615 podStartE2EDuration="5.023834971s" podCreationTimestamp="2026-02-18 19:31:20 +0000 UTC" firstStartedPulling="2026-02-18 19:31:21.018760628 +0000 UTC m=+783.469173424" lastFinishedPulling="2026-02-18 19:31:24.014575974 +0000 UTC m=+786.464988780" observedRunningTime="2026-02-18 19:31:25.020223458 +0000 UTC m=+787.470636264" watchObservedRunningTime="2026-02-18 19:31:25.023834971 +0000 UTC m=+787.474247767" Feb 18 19:31:25 crc kubenswrapper[4754]: I0218 19:31:25.052923 4754 scope.go:117] "RemoveContainer" containerID="44ae8aab4eb7a1ca142b951c0d0b7509d9f33edf3591960a2843461b92da929b" Feb 18 19:31:25 crc kubenswrapper[4754]: I0218 19:31:25.067417 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lnx98"] Feb 18 19:31:25 crc kubenswrapper[4754]: I0218 19:31:25.076338 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lnx98"] Feb 18 19:31:25 crc kubenswrapper[4754]: I0218 19:31:25.643688 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fd8cq"] Feb 18 19:31:25 crc kubenswrapper[4754]: I0218 19:31:25.969367 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fd8cq" podUID="18d62a42-bba1-4fc4-ae4e-d7495c0a1da8" containerName="registry-server" containerID="cri-o://9624fce749cf0657bfe826e263bb89c1562197bc1da17350e64a2b609cbc0e4b" gracePeriod=2 Feb 18 19:31:26 crc kubenswrapper[4754]: I0218 19:31:26.219823 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9616cda5-1936-4a98-83cb-0281199e6126" path="/var/lib/kubelet/pods/9616cda5-1936-4a98-83cb-0281199e6126/volumes" Feb 18 19:31:26 crc kubenswrapper[4754]: I0218 19:31:26.338060 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fd8cq" Feb 18 19:31:26 crc kubenswrapper[4754]: I0218 19:31:26.458841 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18d62a42-bba1-4fc4-ae4e-d7495c0a1da8-catalog-content\") pod \"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8\" (UID: \"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8\") " Feb 18 19:31:26 crc kubenswrapper[4754]: I0218 19:31:26.458966 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18d62a42-bba1-4fc4-ae4e-d7495c0a1da8-utilities\") pod \"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8\" (UID: \"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8\") " Feb 18 19:31:26 crc kubenswrapper[4754]: I0218 19:31:26.459066 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgm8n\" (UniqueName: \"kubernetes.io/projected/18d62a42-bba1-4fc4-ae4e-d7495c0a1da8-kube-api-access-rgm8n\") pod \"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8\" (UID: \"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8\") " Feb 18 19:31:26 crc kubenswrapper[4754]: I0218 19:31:26.461578 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18d62a42-bba1-4fc4-ae4e-d7495c0a1da8-utilities" (OuterVolumeSpecName: "utilities") pod "18d62a42-bba1-4fc4-ae4e-d7495c0a1da8" (UID: "18d62a42-bba1-4fc4-ae4e-d7495c0a1da8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:31:26 crc kubenswrapper[4754]: I0218 19:31:26.472435 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18d62a42-bba1-4fc4-ae4e-d7495c0a1da8-kube-api-access-rgm8n" (OuterVolumeSpecName: "kube-api-access-rgm8n") pod "18d62a42-bba1-4fc4-ae4e-d7495c0a1da8" (UID: "18d62a42-bba1-4fc4-ae4e-d7495c0a1da8"). InnerVolumeSpecName "kube-api-access-rgm8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:31:26 crc kubenswrapper[4754]: I0218 19:31:26.524299 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18d62a42-bba1-4fc4-ae4e-d7495c0a1da8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18d62a42-bba1-4fc4-ae4e-d7495c0a1da8" (UID: "18d62a42-bba1-4fc4-ae4e-d7495c0a1da8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:31:26 crc kubenswrapper[4754]: I0218 19:31:26.561089 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18d62a42-bba1-4fc4-ae4e-d7495c0a1da8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:26 crc kubenswrapper[4754]: I0218 19:31:26.561131 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18d62a42-bba1-4fc4-ae4e-d7495c0a1da8-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:26 crc kubenswrapper[4754]: I0218 19:31:26.561154 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgm8n\" (UniqueName: \"kubernetes.io/projected/18d62a42-bba1-4fc4-ae4e-d7495c0a1da8-kube-api-access-rgm8n\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:26 crc kubenswrapper[4754]: I0218 19:31:26.980047 4754 generic.go:334] "Generic (PLEG): container finished" podID="18d62a42-bba1-4fc4-ae4e-d7495c0a1da8" 
containerID="9624fce749cf0657bfe826e263bb89c1562197bc1da17350e64a2b609cbc0e4b" exitCode=0 Feb 18 19:31:26 crc kubenswrapper[4754]: I0218 19:31:26.980116 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fd8cq" Feb 18 19:31:26 crc kubenswrapper[4754]: I0218 19:31:26.982301 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd8cq" event={"ID":"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8","Type":"ContainerDied","Data":"9624fce749cf0657bfe826e263bb89c1562197bc1da17350e64a2b609cbc0e4b"} Feb 18 19:31:26 crc kubenswrapper[4754]: I0218 19:31:26.982411 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd8cq" event={"ID":"18d62a42-bba1-4fc4-ae4e-d7495c0a1da8","Type":"ContainerDied","Data":"87fe89f4a7e6d85e4a71adec90ce2d4f8894c513f68f8f3eb893329f0fe4c242"} Feb 18 19:31:26 crc kubenswrapper[4754]: I0218 19:31:26.982435 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-zx5fj" event={"ID":"48da33e9-5d62-48a7-b7ff-77aaab4b2d4d","Type":"ContainerStarted","Data":"82bcc08d3f1e85edc60aa95d86fd9cf9ecea5d2b56cd310c108b86aa4246a983"} Feb 18 19:31:26 crc kubenswrapper[4754]: I0218 19:31:26.982475 4754 scope.go:117] "RemoveContainer" containerID="9624fce749cf0657bfe826e263bb89c1562197bc1da17350e64a2b609cbc0e4b" Feb 18 19:31:27 crc kubenswrapper[4754]: I0218 19:31:27.010879 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-zx5fj" podStartSLOduration=2.058965463 podStartE2EDuration="7.010833713s" podCreationTimestamp="2026-02-18 19:31:20 +0000 UTC" firstStartedPulling="2026-02-18 19:31:21.103068819 +0000 UTC m=+783.553481615" lastFinishedPulling="2026-02-18 19:31:26.054937069 +0000 UTC m=+788.505349865" observedRunningTime="2026-02-18 19:31:27.00061615 +0000 UTC m=+789.451028966" 
watchObservedRunningTime="2026-02-18 19:31:27.010833713 +0000 UTC m=+789.461246509" Feb 18 19:31:27 crc kubenswrapper[4754]: I0218 19:31:27.011961 4754 scope.go:117] "RemoveContainer" containerID="17a5d02c790b4060495552b0715811b000b54c605faaec73f2b5bc8cb93fe2c4" Feb 18 19:31:27 crc kubenswrapper[4754]: I0218 19:31:27.072727 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fd8cq"] Feb 18 19:31:27 crc kubenswrapper[4754]: I0218 19:31:27.079934 4754 scope.go:117] "RemoveContainer" containerID="5c1033901206c0abc6c9003d629e220a9ae186696fe56032160a748b38519b4d" Feb 18 19:31:27 crc kubenswrapper[4754]: I0218 19:31:27.086027 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fd8cq"] Feb 18 19:31:27 crc kubenswrapper[4754]: I0218 19:31:27.099899 4754 scope.go:117] "RemoveContainer" containerID="9624fce749cf0657bfe826e263bb89c1562197bc1da17350e64a2b609cbc0e4b" Feb 18 19:31:27 crc kubenswrapper[4754]: E0218 19:31:27.100508 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9624fce749cf0657bfe826e263bb89c1562197bc1da17350e64a2b609cbc0e4b\": container with ID starting with 9624fce749cf0657bfe826e263bb89c1562197bc1da17350e64a2b609cbc0e4b not found: ID does not exist" containerID="9624fce749cf0657bfe826e263bb89c1562197bc1da17350e64a2b609cbc0e4b" Feb 18 19:31:27 crc kubenswrapper[4754]: I0218 19:31:27.100584 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9624fce749cf0657bfe826e263bb89c1562197bc1da17350e64a2b609cbc0e4b"} err="failed to get container status \"9624fce749cf0657bfe826e263bb89c1562197bc1da17350e64a2b609cbc0e4b\": rpc error: code = NotFound desc = could not find container \"9624fce749cf0657bfe826e263bb89c1562197bc1da17350e64a2b609cbc0e4b\": container with ID starting with 9624fce749cf0657bfe826e263bb89c1562197bc1da17350e64a2b609cbc0e4b not 
found: ID does not exist" Feb 18 19:31:27 crc kubenswrapper[4754]: I0218 19:31:27.100635 4754 scope.go:117] "RemoveContainer" containerID="17a5d02c790b4060495552b0715811b000b54c605faaec73f2b5bc8cb93fe2c4" Feb 18 19:31:27 crc kubenswrapper[4754]: E0218 19:31:27.101217 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17a5d02c790b4060495552b0715811b000b54c605faaec73f2b5bc8cb93fe2c4\": container with ID starting with 17a5d02c790b4060495552b0715811b000b54c605faaec73f2b5bc8cb93fe2c4 not found: ID does not exist" containerID="17a5d02c790b4060495552b0715811b000b54c605faaec73f2b5bc8cb93fe2c4" Feb 18 19:31:27 crc kubenswrapper[4754]: I0218 19:31:27.101430 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17a5d02c790b4060495552b0715811b000b54c605faaec73f2b5bc8cb93fe2c4"} err="failed to get container status \"17a5d02c790b4060495552b0715811b000b54c605faaec73f2b5bc8cb93fe2c4\": rpc error: code = NotFound desc = could not find container \"17a5d02c790b4060495552b0715811b000b54c605faaec73f2b5bc8cb93fe2c4\": container with ID starting with 17a5d02c790b4060495552b0715811b000b54c605faaec73f2b5bc8cb93fe2c4 not found: ID does not exist" Feb 18 19:31:27 crc kubenswrapper[4754]: I0218 19:31:27.101650 4754 scope.go:117] "RemoveContainer" containerID="5c1033901206c0abc6c9003d629e220a9ae186696fe56032160a748b38519b4d" Feb 18 19:31:27 crc kubenswrapper[4754]: E0218 19:31:27.102187 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c1033901206c0abc6c9003d629e220a9ae186696fe56032160a748b38519b4d\": container with ID starting with 5c1033901206c0abc6c9003d629e220a9ae186696fe56032160a748b38519b4d not found: ID does not exist" containerID="5c1033901206c0abc6c9003d629e220a9ae186696fe56032160a748b38519b4d" Feb 18 19:31:27 crc kubenswrapper[4754]: I0218 19:31:27.102224 4754 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c1033901206c0abc6c9003d629e220a9ae186696fe56032160a748b38519b4d"} err="failed to get container status \"5c1033901206c0abc6c9003d629e220a9ae186696fe56032160a748b38519b4d\": rpc error: code = NotFound desc = could not find container \"5c1033901206c0abc6c9003d629e220a9ae186696fe56032160a748b38519b4d\": container with ID starting with 5c1033901206c0abc6c9003d629e220a9ae186696fe56032160a748b38519b4d not found: ID does not exist" Feb 18 19:31:28 crc kubenswrapper[4754]: I0218 19:31:28.217371 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18d62a42-bba1-4fc4-ae4e-d7495c0a1da8" path="/var/lib/kubelet/pods/18d62a42-bba1-4fc4-ae4e-d7495c0a1da8/volumes" Feb 18 19:31:30 crc kubenswrapper[4754]: I0218 19:31:30.674931 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-v8tj2" Feb 18 19:31:30 crc kubenswrapper[4754]: I0218 19:31:30.960407 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:30 crc kubenswrapper[4754]: I0218 19:31:30.960471 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:30 crc kubenswrapper[4754]: I0218 19:31:30.966929 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:31 crc kubenswrapper[4754]: I0218 19:31:31.022372 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-559f47f6f8-kplhw" Feb 18 19:31:31 crc kubenswrapper[4754]: I0218 19:31:31.099863 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-72dh6"] Feb 18 19:31:40 crc kubenswrapper[4754]: I0218 19:31:40.634804 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7rv92" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.020223 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq"] Feb 18 19:31:54 crc kubenswrapper[4754]: E0218 19:31:54.021270 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9616cda5-1936-4a98-83cb-0281199e6126" containerName="registry-server" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.021292 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9616cda5-1936-4a98-83cb-0281199e6126" containerName="registry-server" Feb 18 19:31:54 crc kubenswrapper[4754]: E0218 19:31:54.021305 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18d62a42-bba1-4fc4-ae4e-d7495c0a1da8" containerName="registry-server" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.021314 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="18d62a42-bba1-4fc4-ae4e-d7495c0a1da8" containerName="registry-server" Feb 18 19:31:54 crc kubenswrapper[4754]: E0218 19:31:54.021324 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9616cda5-1936-4a98-83cb-0281199e6126" containerName="extract-utilities" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.021332 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9616cda5-1936-4a98-83cb-0281199e6126" containerName="extract-utilities" Feb 18 19:31:54 crc kubenswrapper[4754]: E0218 19:31:54.021346 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9616cda5-1936-4a98-83cb-0281199e6126" containerName="extract-content" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.021355 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9616cda5-1936-4a98-83cb-0281199e6126" containerName="extract-content" Feb 18 19:31:54 crc kubenswrapper[4754]: E0218 19:31:54.021367 4754 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="18d62a42-bba1-4fc4-ae4e-d7495c0a1da8" containerName="extract-content" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.021375 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="18d62a42-bba1-4fc4-ae4e-d7495c0a1da8" containerName="extract-content" Feb 18 19:31:54 crc kubenswrapper[4754]: E0218 19:31:54.021388 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18d62a42-bba1-4fc4-ae4e-d7495c0a1da8" containerName="extract-utilities" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.021396 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="18d62a42-bba1-4fc4-ae4e-d7495c0a1da8" containerName="extract-utilities" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.021545 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="9616cda5-1936-4a98-83cb-0281199e6126" containerName="registry-server" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.021567 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="18d62a42-bba1-4fc4-ae4e-d7495c0a1da8" containerName="registry-server" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.022525 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.025252 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.029803 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq"] Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.212468 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf899926-19d0-4d93-90da-22d0b0d4fbc2-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq\" (UID: \"cf899926-19d0-4d93-90da-22d0b0d4fbc2\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.212627 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf899926-19d0-4d93-90da-22d0b0d4fbc2-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq\" (UID: \"cf899926-19d0-4d93-90da-22d0b0d4fbc2\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.212731 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5skw\" (UniqueName: \"kubernetes.io/projected/cf899926-19d0-4d93-90da-22d0b0d4fbc2-kube-api-access-x5skw\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq\" (UID: \"cf899926-19d0-4d93-90da-22d0b0d4fbc2\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq" Feb 18 19:31:54 crc kubenswrapper[4754]: 
I0218 19:31:54.314303 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5skw\" (UniqueName: \"kubernetes.io/projected/cf899926-19d0-4d93-90da-22d0b0d4fbc2-kube-api-access-x5skw\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq\" (UID: \"cf899926-19d0-4d93-90da-22d0b0d4fbc2\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.314364 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf899926-19d0-4d93-90da-22d0b0d4fbc2-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq\" (UID: \"cf899926-19d0-4d93-90da-22d0b0d4fbc2\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.314459 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf899926-19d0-4d93-90da-22d0b0d4fbc2-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq\" (UID: \"cf899926-19d0-4d93-90da-22d0b0d4fbc2\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.317852 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf899926-19d0-4d93-90da-22d0b0d4fbc2-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq\" (UID: \"cf899926-19d0-4d93-90da-22d0b0d4fbc2\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.317975 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/cf899926-19d0-4d93-90da-22d0b0d4fbc2-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq\" (UID: \"cf899926-19d0-4d93-90da-22d0b0d4fbc2\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.344584 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5skw\" (UniqueName: \"kubernetes.io/projected/cf899926-19d0-4d93-90da-22d0b0d4fbc2-kube-api-access-x5skw\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq\" (UID: \"cf899926-19d0-4d93-90da-22d0b0d4fbc2\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.355699 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq" Feb 18 19:31:54 crc kubenswrapper[4754]: I0218 19:31:54.574255 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq"] Feb 18 19:31:55 crc kubenswrapper[4754]: I0218 19:31:55.203884 4754 generic.go:334] "Generic (PLEG): container finished" podID="cf899926-19d0-4d93-90da-22d0b0d4fbc2" containerID="c63cc78664f947b07a5d4c935f8d9ce0e344c59192b19c28b30b9b93198e8e8d" exitCode=0 Feb 18 19:31:55 crc kubenswrapper[4754]: I0218 19:31:55.204026 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq" event={"ID":"cf899926-19d0-4d93-90da-22d0b0d4fbc2","Type":"ContainerDied","Data":"c63cc78664f947b07a5d4c935f8d9ce0e344c59192b19c28b30b9b93198e8e8d"} Feb 18 19:31:55 crc kubenswrapper[4754]: I0218 19:31:55.204324 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq" event={"ID":"cf899926-19d0-4d93-90da-22d0b0d4fbc2","Type":"ContainerStarted","Data":"9829da152b5fdcb8021fa1c1f7f19e9327004405554bedf760d7bb6187e9ffb7"} Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.189453 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-72dh6" podUID="a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc" containerName="console" containerID="cri-o://039469d03cdd94cdab030b21e0e00b26b3eb6f619f496b7a4733674dcfbe031a" gracePeriod=15 Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.600971 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-72dh6_a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc/console/0.log" Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.601417 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.752243 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-console-config\") pod \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.752363 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv6rj\" (UniqueName: \"kubernetes.io/projected/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-kube-api-access-kv6rj\") pod \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.752416 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-service-ca\") pod 
\"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.752494 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-oauth-serving-cert\") pod \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.752536 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-console-oauth-config\") pod \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.753457 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-service-ca" (OuterVolumeSpecName: "service-ca") pod "a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc" (UID: "a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.753872 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-console-config" (OuterVolumeSpecName: "console-config") pod "a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc" (UID: "a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.754005 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc" (UID: "a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.754769 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc" (UID: "a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.754827 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-trusted-ca-bundle\") pod \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.754907 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-console-serving-cert\") pod \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\" (UID: \"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc\") " Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.756037 4754 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 
19:31:56.756071 4754 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.756091 4754 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.756112 4754 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.768111 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc" (UID: "a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.768620 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-kube-api-access-kv6rj" (OuterVolumeSpecName: "kube-api-access-kv6rj") pod "a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc" (UID: "a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc"). InnerVolumeSpecName "kube-api-access-kv6rj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.768633 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc" (UID: "a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.857621 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv6rj\" (UniqueName: \"kubernetes.io/projected/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-kube-api-access-kv6rj\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.857680 4754 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:56 crc kubenswrapper[4754]: I0218 19:31:56.857692 4754 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:57 crc kubenswrapper[4754]: I0218 19:31:57.222470 4754 generic.go:334] "Generic (PLEG): container finished" podID="cf899926-19d0-4d93-90da-22d0b0d4fbc2" containerID="bda192767c453ed9d9e59a57de8584bf8e71011bf924818adce52a99715dfa34" exitCode=0 Feb 18 19:31:57 crc kubenswrapper[4754]: I0218 19:31:57.222572 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq" event={"ID":"cf899926-19d0-4d93-90da-22d0b0d4fbc2","Type":"ContainerDied","Data":"bda192767c453ed9d9e59a57de8584bf8e71011bf924818adce52a99715dfa34"} Feb 18 19:31:57 crc 
kubenswrapper[4754]: I0218 19:31:57.225396 4754 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-72dh6_a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc/console/0.log" Feb 18 19:31:57 crc kubenswrapper[4754]: I0218 19:31:57.225460 4754 generic.go:334] "Generic (PLEG): container finished" podID="a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc" containerID="039469d03cdd94cdab030b21e0e00b26b3eb6f619f496b7a4733674dcfbe031a" exitCode=2 Feb 18 19:31:57 crc kubenswrapper[4754]: I0218 19:31:57.225515 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-72dh6" event={"ID":"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc","Type":"ContainerDied","Data":"039469d03cdd94cdab030b21e0e00b26b3eb6f619f496b7a4733674dcfbe031a"} Feb 18 19:31:57 crc kubenswrapper[4754]: I0218 19:31:57.225561 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-72dh6" event={"ID":"a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc","Type":"ContainerDied","Data":"accd064468f56473741ca76a84041b3bee5a7ddc1cd6578da528a9a3430065e8"} Feb 18 19:31:57 crc kubenswrapper[4754]: I0218 19:31:57.225590 4754 scope.go:117] "RemoveContainer" containerID="039469d03cdd94cdab030b21e0e00b26b3eb6f619f496b7a4733674dcfbe031a" Feb 18 19:31:57 crc kubenswrapper[4754]: I0218 19:31:57.225769 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-72dh6" Feb 18 19:31:57 crc kubenswrapper[4754]: I0218 19:31:57.270123 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-72dh6"] Feb 18 19:31:57 crc kubenswrapper[4754]: I0218 19:31:57.270375 4754 scope.go:117] "RemoveContainer" containerID="039469d03cdd94cdab030b21e0e00b26b3eb6f619f496b7a4733674dcfbe031a" Feb 18 19:31:57 crc kubenswrapper[4754]: E0218 19:31:57.273800 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"039469d03cdd94cdab030b21e0e00b26b3eb6f619f496b7a4733674dcfbe031a\": container with ID starting with 039469d03cdd94cdab030b21e0e00b26b3eb6f619f496b7a4733674dcfbe031a not found: ID does not exist" containerID="039469d03cdd94cdab030b21e0e00b26b3eb6f619f496b7a4733674dcfbe031a" Feb 18 19:31:57 crc kubenswrapper[4754]: I0218 19:31:57.273879 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"039469d03cdd94cdab030b21e0e00b26b3eb6f619f496b7a4733674dcfbe031a"} err="failed to get container status \"039469d03cdd94cdab030b21e0e00b26b3eb6f619f496b7a4733674dcfbe031a\": rpc error: code = NotFound desc = could not find container \"039469d03cdd94cdab030b21e0e00b26b3eb6f619f496b7a4733674dcfbe031a\": container with ID starting with 039469d03cdd94cdab030b21e0e00b26b3eb6f619f496b7a4733674dcfbe031a not found: ID does not exist" Feb 18 19:31:57 crc kubenswrapper[4754]: I0218 19:31:57.290586 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-72dh6"] Feb 18 19:31:58 crc kubenswrapper[4754]: I0218 19:31:58.222430 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc" path="/var/lib/kubelet/pods/a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc/volumes" Feb 18 19:31:58 crc kubenswrapper[4754]: I0218 19:31:58.236014 4754 generic.go:334] "Generic (PLEG): 
container finished" podID="cf899926-19d0-4d93-90da-22d0b0d4fbc2" containerID="9d5284e2f8f4aaf70c6d73d4f164e930b88b09776747f98a87f6b3b0a2634abf" exitCode=0 Feb 18 19:31:58 crc kubenswrapper[4754]: I0218 19:31:58.236133 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq" event={"ID":"cf899926-19d0-4d93-90da-22d0b0d4fbc2","Type":"ContainerDied","Data":"9d5284e2f8f4aaf70c6d73d4f164e930b88b09776747f98a87f6b3b0a2634abf"} Feb 18 19:31:59 crc kubenswrapper[4754]: I0218 19:31:59.527720 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq" Feb 18 19:31:59 crc kubenswrapper[4754]: I0218 19:31:59.715904 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf899926-19d0-4d93-90da-22d0b0d4fbc2-bundle\") pod \"cf899926-19d0-4d93-90da-22d0b0d4fbc2\" (UID: \"cf899926-19d0-4d93-90da-22d0b0d4fbc2\") " Feb 18 19:31:59 crc kubenswrapper[4754]: I0218 19:31:59.716561 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5skw\" (UniqueName: \"kubernetes.io/projected/cf899926-19d0-4d93-90da-22d0b0d4fbc2-kube-api-access-x5skw\") pod \"cf899926-19d0-4d93-90da-22d0b0d4fbc2\" (UID: \"cf899926-19d0-4d93-90da-22d0b0d4fbc2\") " Feb 18 19:31:59 crc kubenswrapper[4754]: I0218 19:31:59.716619 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf899926-19d0-4d93-90da-22d0b0d4fbc2-util\") pod \"cf899926-19d0-4d93-90da-22d0b0d4fbc2\" (UID: \"cf899926-19d0-4d93-90da-22d0b0d4fbc2\") " Feb 18 19:31:59 crc kubenswrapper[4754]: I0218 19:31:59.717980 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/cf899926-19d0-4d93-90da-22d0b0d4fbc2-bundle" (OuterVolumeSpecName: "bundle") pod "cf899926-19d0-4d93-90da-22d0b0d4fbc2" (UID: "cf899926-19d0-4d93-90da-22d0b0d4fbc2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:31:59 crc kubenswrapper[4754]: I0218 19:31:59.725046 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf899926-19d0-4d93-90da-22d0b0d4fbc2-kube-api-access-x5skw" (OuterVolumeSpecName: "kube-api-access-x5skw") pod "cf899926-19d0-4d93-90da-22d0b0d4fbc2" (UID: "cf899926-19d0-4d93-90da-22d0b0d4fbc2"). InnerVolumeSpecName "kube-api-access-x5skw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:31:59 crc kubenswrapper[4754]: I0218 19:31:59.750296 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf899926-19d0-4d93-90da-22d0b0d4fbc2-util" (OuterVolumeSpecName: "util") pod "cf899926-19d0-4d93-90da-22d0b0d4fbc2" (UID: "cf899926-19d0-4d93-90da-22d0b0d4fbc2"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:31:59 crc kubenswrapper[4754]: I0218 19:31:59.817966 4754 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf899926-19d0-4d93-90da-22d0b0d4fbc2-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:59 crc kubenswrapper[4754]: I0218 19:31:59.818025 4754 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf899926-19d0-4d93-90da-22d0b0d4fbc2-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:59 crc kubenswrapper[4754]: I0218 19:31:59.818034 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5skw\" (UniqueName: \"kubernetes.io/projected/cf899926-19d0-4d93-90da-22d0b0d4fbc2-kube-api-access-x5skw\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:00 crc kubenswrapper[4754]: I0218 19:32:00.259089 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq" event={"ID":"cf899926-19d0-4d93-90da-22d0b0d4fbc2","Type":"ContainerDied","Data":"9829da152b5fdcb8021fa1c1f7f19e9327004405554bedf760d7bb6187e9ffb7"} Feb 18 19:32:00 crc kubenswrapper[4754]: I0218 19:32:00.259214 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9829da152b5fdcb8021fa1c1f7f19e9327004405554bedf760d7bb6187e9ffb7" Feb 18 19:32:00 crc kubenswrapper[4754]: I0218 19:32:00.259282 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2136njkq" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.020166 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7c9bdcc7c9-242nt"] Feb 18 19:32:09 crc kubenswrapper[4754]: E0218 19:32:09.021979 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf899926-19d0-4d93-90da-22d0b0d4fbc2" containerName="extract" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.022109 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf899926-19d0-4d93-90da-22d0b0d4fbc2" containerName="extract" Feb 18 19:32:09 crc kubenswrapper[4754]: E0218 19:32:09.022195 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf899926-19d0-4d93-90da-22d0b0d4fbc2" containerName="util" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.022248 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf899926-19d0-4d93-90da-22d0b0d4fbc2" containerName="util" Feb 18 19:32:09 crc kubenswrapper[4754]: E0218 19:32:09.022311 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc" containerName="console" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.022384 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc" containerName="console" Feb 18 19:32:09 crc kubenswrapper[4754]: E0218 19:32:09.022439 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf899926-19d0-4d93-90da-22d0b0d4fbc2" containerName="pull" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.022490 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf899926-19d0-4d93-90da-22d0b0d4fbc2" containerName="pull" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.022651 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4a5b76c-9c04-44f1-b6c7-8f09e0ffc4fc" 
containerName="console" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.022723 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf899926-19d0-4d93-90da-22d0b0d4fbc2" containerName="extract" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.023513 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7c9bdcc7c9-242nt" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.025256 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.025619 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.025644 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-xtxzl" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.025755 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.026016 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.044265 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7c9bdcc7c9-242nt"] Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.095589 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr2lb\" (UniqueName: \"kubernetes.io/projected/e8785c7d-9677-488d-ab45-64ef27517110-kube-api-access-xr2lb\") pod \"metallb-operator-controller-manager-7c9bdcc7c9-242nt\" (UID: \"e8785c7d-9677-488d-ab45-64ef27517110\") " 
pod="metallb-system/metallb-operator-controller-manager-7c9bdcc7c9-242nt" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.095682 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e8785c7d-9677-488d-ab45-64ef27517110-webhook-cert\") pod \"metallb-operator-controller-manager-7c9bdcc7c9-242nt\" (UID: \"e8785c7d-9677-488d-ab45-64ef27517110\") " pod="metallb-system/metallb-operator-controller-manager-7c9bdcc7c9-242nt" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.095732 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e8785c7d-9677-488d-ab45-64ef27517110-apiservice-cert\") pod \"metallb-operator-controller-manager-7c9bdcc7c9-242nt\" (UID: \"e8785c7d-9677-488d-ab45-64ef27517110\") " pod="metallb-system/metallb-operator-controller-manager-7c9bdcc7c9-242nt" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.197725 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e8785c7d-9677-488d-ab45-64ef27517110-apiservice-cert\") pod \"metallb-operator-controller-manager-7c9bdcc7c9-242nt\" (UID: \"e8785c7d-9677-488d-ab45-64ef27517110\") " pod="metallb-system/metallb-operator-controller-manager-7c9bdcc7c9-242nt" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.198313 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr2lb\" (UniqueName: \"kubernetes.io/projected/e8785c7d-9677-488d-ab45-64ef27517110-kube-api-access-xr2lb\") pod \"metallb-operator-controller-manager-7c9bdcc7c9-242nt\" (UID: \"e8785c7d-9677-488d-ab45-64ef27517110\") " pod="metallb-system/metallb-operator-controller-manager-7c9bdcc7c9-242nt" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.198553 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e8785c7d-9677-488d-ab45-64ef27517110-webhook-cert\") pod \"metallb-operator-controller-manager-7c9bdcc7c9-242nt\" (UID: \"e8785c7d-9677-488d-ab45-64ef27517110\") " pod="metallb-system/metallb-operator-controller-manager-7c9bdcc7c9-242nt" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.205372 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e8785c7d-9677-488d-ab45-64ef27517110-webhook-cert\") pod \"metallb-operator-controller-manager-7c9bdcc7c9-242nt\" (UID: \"e8785c7d-9677-488d-ab45-64ef27517110\") " pod="metallb-system/metallb-operator-controller-manager-7c9bdcc7c9-242nt" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.206702 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e8785c7d-9677-488d-ab45-64ef27517110-apiservice-cert\") pod \"metallb-operator-controller-manager-7c9bdcc7c9-242nt\" (UID: \"e8785c7d-9677-488d-ab45-64ef27517110\") " pod="metallb-system/metallb-operator-controller-manager-7c9bdcc7c9-242nt" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.217966 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr2lb\" (UniqueName: \"kubernetes.io/projected/e8785c7d-9677-488d-ab45-64ef27517110-kube-api-access-xr2lb\") pod \"metallb-operator-controller-manager-7c9bdcc7c9-242nt\" (UID: \"e8785c7d-9677-488d-ab45-64ef27517110\") " pod="metallb-system/metallb-operator-controller-manager-7c9bdcc7c9-242nt" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.340985 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7c9bdcc7c9-242nt" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.349797 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp"] Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.350823 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.353842 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.357241 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-g492v" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.357982 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.372734 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp"] Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.402507 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f-webhook-cert\") pod \"metallb-operator-webhook-server-85fc6d94f-wvhnp\" (UID: \"29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f\") " pod="metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.402554 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f-apiservice-cert\") pod \"metallb-operator-webhook-server-85fc6d94f-wvhnp\" 
(UID: \"29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f\") " pod="metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.402641 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnz28\" (UniqueName: \"kubernetes.io/projected/29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f-kube-api-access-qnz28\") pod \"metallb-operator-webhook-server-85fc6d94f-wvhnp\" (UID: \"29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f\") " pod="metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.503327 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnz28\" (UniqueName: \"kubernetes.io/projected/29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f-kube-api-access-qnz28\") pod \"metallb-operator-webhook-server-85fc6d94f-wvhnp\" (UID: \"29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f\") " pod="metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.503784 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f-webhook-cert\") pod \"metallb-operator-webhook-server-85fc6d94f-wvhnp\" (UID: \"29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f\") " pod="metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.503812 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f-apiservice-cert\") pod \"metallb-operator-webhook-server-85fc6d94f-wvhnp\" (UID: \"29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f\") " pod="metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.510550 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f-webhook-cert\") pod \"metallb-operator-webhook-server-85fc6d94f-wvhnp\" (UID: \"29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f\") " pod="metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.510586 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f-apiservice-cert\") pod \"metallb-operator-webhook-server-85fc6d94f-wvhnp\" (UID: \"29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f\") " pod="metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.528114 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnz28\" (UniqueName: \"kubernetes.io/projected/29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f-kube-api-access-qnz28\") pod \"metallb-operator-webhook-server-85fc6d94f-wvhnp\" (UID: \"29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f\") " pod="metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp" Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.613515 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7c9bdcc7c9-242nt"] Feb 18 19:32:09 crc kubenswrapper[4754]: W0218 19:32:09.622388 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8785c7d_9677_488d_ab45_64ef27517110.slice/crio-408368f61dee751fc61ac57b34a5bad845210fba7520ab7b8dcf9f8916cbdc34 WatchSource:0}: Error finding container 408368f61dee751fc61ac57b34a5bad845210fba7520ab7b8dcf9f8916cbdc34: Status 404 returned error can't find the container with id 408368f61dee751fc61ac57b34a5bad845210fba7520ab7b8dcf9f8916cbdc34 Feb 18 19:32:09 crc kubenswrapper[4754]: I0218 19:32:09.723519 4754 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp" Feb 18 19:32:10 crc kubenswrapper[4754]: I0218 19:32:10.128520 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp"] Feb 18 19:32:10 crc kubenswrapper[4754]: W0218 19:32:10.133829 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29ea22ea_e1bc_4c19_b3a4_f8ea5a2cdb1f.slice/crio-afefa9ee2dd30704e8eef37c5a595eebe9ff0d12b4d5c8b70a0d753ac5359587 WatchSource:0}: Error finding container afefa9ee2dd30704e8eef37c5a595eebe9ff0d12b4d5c8b70a0d753ac5359587: Status 404 returned error can't find the container with id afefa9ee2dd30704e8eef37c5a595eebe9ff0d12b4d5c8b70a0d753ac5359587 Feb 18 19:32:10 crc kubenswrapper[4754]: I0218 19:32:10.341314 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp" event={"ID":"29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f","Type":"ContainerStarted","Data":"afefa9ee2dd30704e8eef37c5a595eebe9ff0d12b4d5c8b70a0d753ac5359587"} Feb 18 19:32:10 crc kubenswrapper[4754]: I0218 19:32:10.342446 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7c9bdcc7c9-242nt" event={"ID":"e8785c7d-9677-488d-ab45-64ef27517110","Type":"ContainerStarted","Data":"408368f61dee751fc61ac57b34a5bad845210fba7520ab7b8dcf9f8916cbdc34"} Feb 18 19:32:13 crc kubenswrapper[4754]: I0218 19:32:13.371795 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7c9bdcc7c9-242nt" event={"ID":"e8785c7d-9677-488d-ab45-64ef27517110","Type":"ContainerStarted","Data":"cd02d533c2f6ff5c9ff2bebfc2562d10243a4144b0f5dd630b759289fb182951"} Feb 18 19:32:13 crc kubenswrapper[4754]: I0218 19:32:13.372397 4754 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7c9bdcc7c9-242nt" Feb 18 19:32:13 crc kubenswrapper[4754]: I0218 19:32:13.399888 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7c9bdcc7c9-242nt" podStartSLOduration=1.041229276 podStartE2EDuration="4.399869982s" podCreationTimestamp="2026-02-18 19:32:09 +0000 UTC" firstStartedPulling="2026-02-18 19:32:09.626700355 +0000 UTC m=+832.077113151" lastFinishedPulling="2026-02-18 19:32:12.985341071 +0000 UTC m=+835.435753857" observedRunningTime="2026-02-18 19:32:13.393713307 +0000 UTC m=+835.844126123" watchObservedRunningTime="2026-02-18 19:32:13.399869982 +0000 UTC m=+835.850282778" Feb 18 19:32:15 crc kubenswrapper[4754]: I0218 19:32:15.387836 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp" event={"ID":"29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f","Type":"ContainerStarted","Data":"60546722c1979e1200f7bc898493bd89658834bfd798e21d985d5c3dbd27a6aa"} Feb 18 19:32:15 crc kubenswrapper[4754]: I0218 19:32:15.388215 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp" Feb 18 19:32:15 crc kubenswrapper[4754]: I0218 19:32:15.408745 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp" podStartSLOduration=1.837348449 podStartE2EDuration="6.408717702s" podCreationTimestamp="2026-02-18 19:32:09 +0000 UTC" firstStartedPulling="2026-02-18 19:32:10.137485914 +0000 UTC m=+832.587898710" lastFinishedPulling="2026-02-18 19:32:14.708855167 +0000 UTC m=+837.159267963" observedRunningTime="2026-02-18 19:32:15.405911314 +0000 UTC m=+837.856324150" watchObservedRunningTime="2026-02-18 19:32:15.408717702 +0000 UTC m=+837.859130498" Feb 18 19:32:29 crc kubenswrapper[4754]: I0218 
19:32:29.730580 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp" Feb 18 19:32:49 crc kubenswrapper[4754]: I0218 19:32:49.344917 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7c9bdcc7c9-242nt" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.158250 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-g26mw"] Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.161780 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.165205 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-llqrp"] Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.165442 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.165750 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.166234 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llqrp" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.167027 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-krv94" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.169201 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.178318 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-llqrp"] Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.250533 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-g9xff"] Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.251522 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g9xff" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.254015 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-h2wmb" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.254044 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.254523 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.254700 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.269551 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-2fxw5"] Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.270946 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-2fxw5" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.274003 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.291602 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-2fxw5"] Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.316258 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9342c20-c37f-45ee-b3d7-e56929cc39e5-metrics-certs\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.316348 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbd5c038-403e-489a-9c42-a975ff527313-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-llqrp\" (UID: \"cbd5c038-403e-489a-9c42-a975ff527313\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llqrp" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.316380 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f9342c20-c37f-45ee-b3d7-e56929cc39e5-reloader\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.316403 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f9342c20-c37f-45ee-b3d7-e56929cc39e5-frr-conf\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 
19:32:50.316446 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f9342c20-c37f-45ee-b3d7-e56929cc39e5-metrics\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.316535 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f9342c20-c37f-45ee-b3d7-e56929cc39e5-frr-sockets\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.316713 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzbzl\" (UniqueName: \"kubernetes.io/projected/cbd5c038-403e-489a-9c42-a975ff527313-kube-api-access-fzbzl\") pod \"frr-k8s-webhook-server-78b44bf5bb-llqrp\" (UID: \"cbd5c038-403e-489a-9c42-a975ff527313\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llqrp" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.316815 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f9342c20-c37f-45ee-b3d7-e56929cc39e5-frr-startup\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.316867 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfnmx\" (UniqueName: \"kubernetes.io/projected/f9342c20-c37f-45ee-b3d7-e56929cc39e5-kube-api-access-vfnmx\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.418008 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/699191b9-e719-4c3e-82da-cca400d49a6b-metrics-certs\") pod \"controller-69bbfbf88f-2fxw5\" (UID: \"699191b9-e719-4c3e-82da-cca400d49a6b\") " pod="metallb-system/controller-69bbfbf88f-2fxw5" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.418066 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk49j\" (UniqueName: \"kubernetes.io/projected/cbd0c308-0594-4a99-a764-353e804a0614-kube-api-access-pk49j\") pod \"speaker-g9xff\" (UID: \"cbd0c308-0594-4a99-a764-353e804a0614\") " pod="metallb-system/speaker-g9xff" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.418205 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f9342c20-c37f-45ee-b3d7-e56929cc39e5-frr-sockets\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.418240 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cbd0c308-0594-4a99-a764-353e804a0614-metallb-excludel2\") pod \"speaker-g9xff\" (UID: \"cbd0c308-0594-4a99-a764-353e804a0614\") " pod="metallb-system/speaker-g9xff" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.418271 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzbzl\" (UniqueName: \"kubernetes.io/projected/cbd5c038-403e-489a-9c42-a975ff527313-kube-api-access-fzbzl\") pod \"frr-k8s-webhook-server-78b44bf5bb-llqrp\" (UID: \"cbd5c038-403e-489a-9c42-a975ff527313\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llqrp" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.418299 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f9342c20-c37f-45ee-b3d7-e56929cc39e5-frr-startup\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.418478 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfnmx\" (UniqueName: \"kubernetes.io/projected/f9342c20-c37f-45ee-b3d7-e56929cc39e5-kube-api-access-vfnmx\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.418552 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9342c20-c37f-45ee-b3d7-e56929cc39e5-metrics-certs\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.418628 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbd5c038-403e-489a-9c42-a975ff527313-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-llqrp\" (UID: \"cbd5c038-403e-489a-9c42-a975ff527313\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llqrp" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.418674 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f9342c20-c37f-45ee-b3d7-e56929cc39e5-reloader\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.418704 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/f9342c20-c37f-45ee-b3d7-e56929cc39e5-frr-conf\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.418738 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f9342c20-c37f-45ee-b3d7-e56929cc39e5-metrics\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.418785 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/699191b9-e719-4c3e-82da-cca400d49a6b-cert\") pod \"controller-69bbfbf88f-2fxw5\" (UID: \"699191b9-e719-4c3e-82da-cca400d49a6b\") " pod="metallb-system/controller-69bbfbf88f-2fxw5" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.418840 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cbd0c308-0594-4a99-a764-353e804a0614-memberlist\") pod \"speaker-g9xff\" (UID: \"cbd0c308-0594-4a99-a764-353e804a0614\") " pod="metallb-system/speaker-g9xff" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.418862 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbd0c308-0594-4a99-a764-353e804a0614-metrics-certs\") pod \"speaker-g9xff\" (UID: \"cbd0c308-0594-4a99-a764-353e804a0614\") " pod="metallb-system/speaker-g9xff" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.418907 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnxp9\" (UniqueName: \"kubernetes.io/projected/699191b9-e719-4c3e-82da-cca400d49a6b-kube-api-access-rnxp9\") pod \"controller-69bbfbf88f-2fxw5\" 
(UID: \"699191b9-e719-4c3e-82da-cca400d49a6b\") " pod="metallb-system/controller-69bbfbf88f-2fxw5" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.419042 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f9342c20-c37f-45ee-b3d7-e56929cc39e5-reloader\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.418881 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f9342c20-c37f-45ee-b3d7-e56929cc39e5-frr-sockets\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.419162 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f9342c20-c37f-45ee-b3d7-e56929cc39e5-frr-conf\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.419491 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f9342c20-c37f-45ee-b3d7-e56929cc39e5-metrics\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.420477 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f9342c20-c37f-45ee-b3d7-e56929cc39e5-frr-startup\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.435879 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/cbd5c038-403e-489a-9c42-a975ff527313-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-llqrp\" (UID: \"cbd5c038-403e-489a-9c42-a975ff527313\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llqrp" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.440640 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9342c20-c37f-45ee-b3d7-e56929cc39e5-metrics-certs\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.442838 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzbzl\" (UniqueName: \"kubernetes.io/projected/cbd5c038-403e-489a-9c42-a975ff527313-kube-api-access-fzbzl\") pod \"frr-k8s-webhook-server-78b44bf5bb-llqrp\" (UID: \"cbd5c038-403e-489a-9c42-a975ff527313\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llqrp" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.447525 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfnmx\" (UniqueName: \"kubernetes.io/projected/f9342c20-c37f-45ee-b3d7-e56929cc39e5-kube-api-access-vfnmx\") pod \"frr-k8s-g26mw\" (UID: \"f9342c20-c37f-45ee-b3d7-e56929cc39e5\") " pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.482879 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-g26mw" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.490433 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llqrp" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.520375 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/699191b9-e719-4c3e-82da-cca400d49a6b-cert\") pod \"controller-69bbfbf88f-2fxw5\" (UID: \"699191b9-e719-4c3e-82da-cca400d49a6b\") " pod="metallb-system/controller-69bbfbf88f-2fxw5" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.520437 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cbd0c308-0594-4a99-a764-353e804a0614-memberlist\") pod \"speaker-g9xff\" (UID: \"cbd0c308-0594-4a99-a764-353e804a0614\") " pod="metallb-system/speaker-g9xff" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.520457 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbd0c308-0594-4a99-a764-353e804a0614-metrics-certs\") pod \"speaker-g9xff\" (UID: \"cbd0c308-0594-4a99-a764-353e804a0614\") " pod="metallb-system/speaker-g9xff" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.520482 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnxp9\" (UniqueName: \"kubernetes.io/projected/699191b9-e719-4c3e-82da-cca400d49a6b-kube-api-access-rnxp9\") pod \"controller-69bbfbf88f-2fxw5\" (UID: \"699191b9-e719-4c3e-82da-cca400d49a6b\") " pod="metallb-system/controller-69bbfbf88f-2fxw5" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.520510 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/699191b9-e719-4c3e-82da-cca400d49a6b-metrics-certs\") pod \"controller-69bbfbf88f-2fxw5\" (UID: \"699191b9-e719-4c3e-82da-cca400d49a6b\") " pod="metallb-system/controller-69bbfbf88f-2fxw5" Feb 18 19:32:50 crc 
kubenswrapper[4754]: I0218 19:32:50.520530 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk49j\" (UniqueName: \"kubernetes.io/projected/cbd0c308-0594-4a99-a764-353e804a0614-kube-api-access-pk49j\") pod \"speaker-g9xff\" (UID: \"cbd0c308-0594-4a99-a764-353e804a0614\") " pod="metallb-system/speaker-g9xff" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.520584 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cbd0c308-0594-4a99-a764-353e804a0614-metallb-excludel2\") pod \"speaker-g9xff\" (UID: \"cbd0c308-0594-4a99-a764-353e804a0614\") " pod="metallb-system/speaker-g9xff" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.521503 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cbd0c308-0594-4a99-a764-353e804a0614-metallb-excludel2\") pod \"speaker-g9xff\" (UID: \"cbd0c308-0594-4a99-a764-353e804a0614\") " pod="metallb-system/speaker-g9xff" Feb 18 19:32:50 crc kubenswrapper[4754]: E0218 19:32:50.521690 4754 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 19:32:50 crc kubenswrapper[4754]: E0218 19:32:50.521766 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbd0c308-0594-4a99-a764-353e804a0614-memberlist podName:cbd0c308-0594-4a99-a764-353e804a0614 nodeName:}" failed. No retries permitted until 2026-02-18 19:32:51.021744133 +0000 UTC m=+873.472156929 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cbd0c308-0594-4a99-a764-353e804a0614-memberlist") pod "speaker-g9xff" (UID: "cbd0c308-0594-4a99-a764-353e804a0614") : secret "metallb-memberlist" not found Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.527630 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbd0c308-0594-4a99-a764-353e804a0614-metrics-certs\") pod \"speaker-g9xff\" (UID: \"cbd0c308-0594-4a99-a764-353e804a0614\") " pod="metallb-system/speaker-g9xff" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.528381 4754 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.529581 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/699191b9-e719-4c3e-82da-cca400d49a6b-metrics-certs\") pod \"controller-69bbfbf88f-2fxw5\" (UID: \"699191b9-e719-4c3e-82da-cca400d49a6b\") " pod="metallb-system/controller-69bbfbf88f-2fxw5" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.537770 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/699191b9-e719-4c3e-82da-cca400d49a6b-cert\") pod \"controller-69bbfbf88f-2fxw5\" (UID: \"699191b9-e719-4c3e-82da-cca400d49a6b\") " pod="metallb-system/controller-69bbfbf88f-2fxw5" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.545299 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk49j\" (UniqueName: \"kubernetes.io/projected/cbd0c308-0594-4a99-a764-353e804a0614-kube-api-access-pk49j\") pod \"speaker-g9xff\" (UID: \"cbd0c308-0594-4a99-a764-353e804a0614\") " pod="metallb-system/speaker-g9xff" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.545529 4754 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rnxp9\" (UniqueName: \"kubernetes.io/projected/699191b9-e719-4c3e-82da-cca400d49a6b-kube-api-access-rnxp9\") pod \"controller-69bbfbf88f-2fxw5\" (UID: \"699191b9-e719-4c3e-82da-cca400d49a6b\") " pod="metallb-system/controller-69bbfbf88f-2fxw5" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.586485 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-2fxw5" Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.804713 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-llqrp"] Feb 18 19:32:50 crc kubenswrapper[4754]: I0218 19:32:50.978754 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-2fxw5"] Feb 18 19:32:50 crc kubenswrapper[4754]: W0218 19:32:50.984741 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod699191b9_e719_4c3e_82da_cca400d49a6b.slice/crio-f0193d4b862f1136df56b15d82e5fbaee7c37d0024cf89f752fd5e0ab8e93db3 WatchSource:0}: Error finding container f0193d4b862f1136df56b15d82e5fbaee7c37d0024cf89f752fd5e0ab8e93db3: Status 404 returned error can't find the container with id f0193d4b862f1136df56b15d82e5fbaee7c37d0024cf89f752fd5e0ab8e93db3 Feb 18 19:32:51 crc kubenswrapper[4754]: I0218 19:32:51.039658 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cbd0c308-0594-4a99-a764-353e804a0614-memberlist\") pod \"speaker-g9xff\" (UID: \"cbd0c308-0594-4a99-a764-353e804a0614\") " pod="metallb-system/speaker-g9xff" Feb 18 19:32:51 crc kubenswrapper[4754]: E0218 19:32:51.039868 4754 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 19:32:51 crc kubenswrapper[4754]: E0218 19:32:51.039947 4754 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/cbd0c308-0594-4a99-a764-353e804a0614-memberlist podName:cbd0c308-0594-4a99-a764-353e804a0614 nodeName:}" failed. No retries permitted until 2026-02-18 19:32:52.03992625 +0000 UTC m=+874.490339046 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cbd0c308-0594-4a99-a764-353e804a0614-memberlist") pod "speaker-g9xff" (UID: "cbd0c308-0594-4a99-a764-353e804a0614") : secret "metallb-memberlist" not found Feb 18 19:32:51 crc kubenswrapper[4754]: I0218 19:32:51.655177 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g26mw" event={"ID":"f9342c20-c37f-45ee-b3d7-e56929cc39e5","Type":"ContainerStarted","Data":"780ab9a05fb8119736dd7efbf74b75a9aa7c5f3bda759d9f871daa8b6017a954"} Feb 18 19:32:51 crc kubenswrapper[4754]: I0218 19:32:51.657244 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-2fxw5" event={"ID":"699191b9-e719-4c3e-82da-cca400d49a6b","Type":"ContainerStarted","Data":"a5cf046d5f52162943c8d5e3e3afb7946d6d67589953231fe20b7845e080883e"} Feb 18 19:32:51 crc kubenswrapper[4754]: I0218 19:32:51.657295 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-2fxw5" event={"ID":"699191b9-e719-4c3e-82da-cca400d49a6b","Type":"ContainerStarted","Data":"bc79e4afdc335400137f29d618fbd066f086eef229afea80393f1695921b2ca0"} Feb 18 19:32:51 crc kubenswrapper[4754]: I0218 19:32:51.657305 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-2fxw5" event={"ID":"699191b9-e719-4c3e-82da-cca400d49a6b","Type":"ContainerStarted","Data":"f0193d4b862f1136df56b15d82e5fbaee7c37d0024cf89f752fd5e0ab8e93db3"} Feb 18 19:32:51 crc kubenswrapper[4754]: I0218 19:32:51.657351 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-2fxw5" Feb 18 19:32:51 crc kubenswrapper[4754]: I0218 
19:32:51.658816 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llqrp" event={"ID":"cbd5c038-403e-489a-9c42-a975ff527313","Type":"ContainerStarted","Data":"9a2b54efed1cb132aefa8a35ae52c0fd601a8f19dc08d0e196d6dab9e83764fc"} Feb 18 19:32:51 crc kubenswrapper[4754]: I0218 19:32:51.679987 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-2fxw5" podStartSLOduration=1.679961585 podStartE2EDuration="1.679961585s" podCreationTimestamp="2026-02-18 19:32:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:32:51.674820115 +0000 UTC m=+874.125232981" watchObservedRunningTime="2026-02-18 19:32:51.679961585 +0000 UTC m=+874.130374381" Feb 18 19:32:52 crc kubenswrapper[4754]: I0218 19:32:52.054673 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cbd0c308-0594-4a99-a764-353e804a0614-memberlist\") pod \"speaker-g9xff\" (UID: \"cbd0c308-0594-4a99-a764-353e804a0614\") " pod="metallb-system/speaker-g9xff" Feb 18 19:32:52 crc kubenswrapper[4754]: I0218 19:32:52.064515 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cbd0c308-0594-4a99-a764-353e804a0614-memberlist\") pod \"speaker-g9xff\" (UID: \"cbd0c308-0594-4a99-a764-353e804a0614\") " pod="metallb-system/speaker-g9xff" Feb 18 19:32:52 crc kubenswrapper[4754]: I0218 19:32:52.065360 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-g9xff" Feb 18 19:32:52 crc kubenswrapper[4754]: W0218 19:32:52.103851 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbd0c308_0594_4a99_a764_353e804a0614.slice/crio-3c27029b67bc26f7de139c7ef3e8fd13a2ef524779a51fe28052bbcabd0818f1 WatchSource:0}: Error finding container 3c27029b67bc26f7de139c7ef3e8fd13a2ef524779a51fe28052bbcabd0818f1: Status 404 returned error can't find the container with id 3c27029b67bc26f7de139c7ef3e8fd13a2ef524779a51fe28052bbcabd0818f1 Feb 18 19:32:52 crc kubenswrapper[4754]: I0218 19:32:52.673406 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g9xff" event={"ID":"cbd0c308-0594-4a99-a764-353e804a0614","Type":"ContainerStarted","Data":"b6694b9c24e1f9b7817a10ab8e431bb5e59a94bf847c80bc34f2e641477f9b4d"} Feb 18 19:32:52 crc kubenswrapper[4754]: I0218 19:32:52.673971 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g9xff" event={"ID":"cbd0c308-0594-4a99-a764-353e804a0614","Type":"ContainerStarted","Data":"5947d35be3cfc0c29b2b17fdb09762d51cdeb2d1bcd7243337e71a7899dcf6c4"} Feb 18 19:32:52 crc kubenswrapper[4754]: I0218 19:32:52.673986 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g9xff" event={"ID":"cbd0c308-0594-4a99-a764-353e804a0614","Type":"ContainerStarted","Data":"3c27029b67bc26f7de139c7ef3e8fd13a2ef524779a51fe28052bbcabd0818f1"} Feb 18 19:32:52 crc kubenswrapper[4754]: I0218 19:32:52.674181 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-g9xff" Feb 18 19:32:52 crc kubenswrapper[4754]: I0218 19:32:52.696437 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-g9xff" podStartSLOduration=2.696413351 podStartE2EDuration="2.696413351s" podCreationTimestamp="2026-02-18 19:32:50 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:32:52.69380822 +0000 UTC m=+875.144221016" watchObservedRunningTime="2026-02-18 19:32:52.696413351 +0000 UTC m=+875.146826147" Feb 18 19:32:59 crc kubenswrapper[4754]: I0218 19:32:59.149650 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fftwg"] Feb 18 19:32:59 crc kubenswrapper[4754]: I0218 19:32:59.151947 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fftwg" Feb 18 19:32:59 crc kubenswrapper[4754]: I0218 19:32:59.160081 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fftwg"] Feb 18 19:32:59 crc kubenswrapper[4754]: I0218 19:32:59.285916 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328e7255-9eab-4197-9621-a3d2853a1f8a-catalog-content\") pod \"certified-operators-fftwg\" (UID: \"328e7255-9eab-4197-9621-a3d2853a1f8a\") " pod="openshift-marketplace/certified-operators-fftwg" Feb 18 19:32:59 crc kubenswrapper[4754]: I0218 19:32:59.285982 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xbxx\" (UniqueName: \"kubernetes.io/projected/328e7255-9eab-4197-9621-a3d2853a1f8a-kube-api-access-2xbxx\") pod \"certified-operators-fftwg\" (UID: \"328e7255-9eab-4197-9621-a3d2853a1f8a\") " pod="openshift-marketplace/certified-operators-fftwg" Feb 18 19:32:59 crc kubenswrapper[4754]: I0218 19:32:59.286089 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328e7255-9eab-4197-9621-a3d2853a1f8a-utilities\") pod \"certified-operators-fftwg\" (UID: \"328e7255-9eab-4197-9621-a3d2853a1f8a\") " 
pod="openshift-marketplace/certified-operators-fftwg" Feb 18 19:32:59 crc kubenswrapper[4754]: I0218 19:32:59.388047 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328e7255-9eab-4197-9621-a3d2853a1f8a-utilities\") pod \"certified-operators-fftwg\" (UID: \"328e7255-9eab-4197-9621-a3d2853a1f8a\") " pod="openshift-marketplace/certified-operators-fftwg" Feb 18 19:32:59 crc kubenswrapper[4754]: I0218 19:32:59.388188 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328e7255-9eab-4197-9621-a3d2853a1f8a-catalog-content\") pod \"certified-operators-fftwg\" (UID: \"328e7255-9eab-4197-9621-a3d2853a1f8a\") " pod="openshift-marketplace/certified-operators-fftwg" Feb 18 19:32:59 crc kubenswrapper[4754]: I0218 19:32:59.388219 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xbxx\" (UniqueName: \"kubernetes.io/projected/328e7255-9eab-4197-9621-a3d2853a1f8a-kube-api-access-2xbxx\") pod \"certified-operators-fftwg\" (UID: \"328e7255-9eab-4197-9621-a3d2853a1f8a\") " pod="openshift-marketplace/certified-operators-fftwg" Feb 18 19:32:59 crc kubenswrapper[4754]: I0218 19:32:59.388891 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328e7255-9eab-4197-9621-a3d2853a1f8a-utilities\") pod \"certified-operators-fftwg\" (UID: \"328e7255-9eab-4197-9621-a3d2853a1f8a\") " pod="openshift-marketplace/certified-operators-fftwg" Feb 18 19:32:59 crc kubenswrapper[4754]: I0218 19:32:59.388918 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328e7255-9eab-4197-9621-a3d2853a1f8a-catalog-content\") pod \"certified-operators-fftwg\" (UID: \"328e7255-9eab-4197-9621-a3d2853a1f8a\") " 
pod="openshift-marketplace/certified-operators-fftwg" Feb 18 19:32:59 crc kubenswrapper[4754]: I0218 19:32:59.411309 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xbxx\" (UniqueName: \"kubernetes.io/projected/328e7255-9eab-4197-9621-a3d2853a1f8a-kube-api-access-2xbxx\") pod \"certified-operators-fftwg\" (UID: \"328e7255-9eab-4197-9621-a3d2853a1f8a\") " pod="openshift-marketplace/certified-operators-fftwg" Feb 18 19:32:59 crc kubenswrapper[4754]: I0218 19:32:59.473842 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fftwg" Feb 18 19:32:59 crc kubenswrapper[4754]: I0218 19:32:59.754951 4754 generic.go:334] "Generic (PLEG): container finished" podID="f9342c20-c37f-45ee-b3d7-e56929cc39e5" containerID="d798a95f721e2bf18bf5936dc3d3f86d1e7b499d25331d06c0f97fd64b54e32f" exitCode=0 Feb 18 19:32:59 crc kubenswrapper[4754]: I0218 19:32:59.755136 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g26mw" event={"ID":"f9342c20-c37f-45ee-b3d7-e56929cc39e5","Type":"ContainerDied","Data":"d798a95f721e2bf18bf5936dc3d3f86d1e7b499d25331d06c0f97fd64b54e32f"} Feb 18 19:32:59 crc kubenswrapper[4754]: I0218 19:32:59.770039 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llqrp" event={"ID":"cbd5c038-403e-489a-9c42-a975ff527313","Type":"ContainerStarted","Data":"5884e48c6eddf490ea9981a059650697c6d2132c4b36fd9ae3fc57eaff56dba1"} Feb 18 19:32:59 crc kubenswrapper[4754]: I0218 19:32:59.770372 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llqrp" Feb 18 19:32:59 crc kubenswrapper[4754]: I0218 19:32:59.803134 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llqrp" podStartSLOduration=2.017039124 podStartE2EDuration="9.80310666s" 
podCreationTimestamp="2026-02-18 19:32:50 +0000 UTC" firstStartedPulling="2026-02-18 19:32:50.878307616 +0000 UTC m=+873.328720402" lastFinishedPulling="2026-02-18 19:32:58.664375142 +0000 UTC m=+881.114787938" observedRunningTime="2026-02-18 19:32:59.796474993 +0000 UTC m=+882.246887789" watchObservedRunningTime="2026-02-18 19:32:59.80310666 +0000 UTC m=+882.253519456" Feb 18 19:33:00 crc kubenswrapper[4754]: I0218 19:33:00.008696 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fftwg"] Feb 18 19:33:00 crc kubenswrapper[4754]: I0218 19:33:00.781926 4754 generic.go:334] "Generic (PLEG): container finished" podID="328e7255-9eab-4197-9621-a3d2853a1f8a" containerID="039b6f6939ec25e041f777bbcf694c9d0a251a90267ee5fc2585268b7152ccc3" exitCode=0 Feb 18 19:33:00 crc kubenswrapper[4754]: I0218 19:33:00.782045 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fftwg" event={"ID":"328e7255-9eab-4197-9621-a3d2853a1f8a","Type":"ContainerDied","Data":"039b6f6939ec25e041f777bbcf694c9d0a251a90267ee5fc2585268b7152ccc3"} Feb 18 19:33:00 crc kubenswrapper[4754]: I0218 19:33:00.782372 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fftwg" event={"ID":"328e7255-9eab-4197-9621-a3d2853a1f8a","Type":"ContainerStarted","Data":"0ffce40822db18fdb7a009458123c399bb20de007ca91832699a01c8ce2d8626"} Feb 18 19:33:00 crc kubenswrapper[4754]: I0218 19:33:00.785659 4754 generic.go:334] "Generic (PLEG): container finished" podID="f9342c20-c37f-45ee-b3d7-e56929cc39e5" containerID="3c0e5c5ee88e22ec9ed04df29e348699f65058ce26e59dda17fd11fac0442299" exitCode=0 Feb 18 19:33:00 crc kubenswrapper[4754]: I0218 19:33:00.785695 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g26mw" 
event={"ID":"f9342c20-c37f-45ee-b3d7-e56929cc39e5","Type":"ContainerDied","Data":"3c0e5c5ee88e22ec9ed04df29e348699f65058ce26e59dda17fd11fac0442299"} Feb 18 19:33:01 crc kubenswrapper[4754]: I0218 19:33:01.797834 4754 generic.go:334] "Generic (PLEG): container finished" podID="f9342c20-c37f-45ee-b3d7-e56929cc39e5" containerID="fd6d9ad65f85f937190e624d8bd78540bb13591a64073136f8084081b8911167" exitCode=0 Feb 18 19:33:01 crc kubenswrapper[4754]: I0218 19:33:01.797938 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g26mw" event={"ID":"f9342c20-c37f-45ee-b3d7-e56929cc39e5","Type":"ContainerDied","Data":"fd6d9ad65f85f937190e624d8bd78540bb13591a64073136f8084081b8911167"} Feb 18 19:33:02 crc kubenswrapper[4754]: I0218 19:33:02.069676 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-g9xff" Feb 18 19:33:02 crc kubenswrapper[4754]: I0218 19:33:02.817286 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g26mw" event={"ID":"f9342c20-c37f-45ee-b3d7-e56929cc39e5","Type":"ContainerStarted","Data":"5260df97f0c548debcbec0b1c423ac92cf83d3f4052634be333c51d6c6c7ddf7"} Feb 18 19:33:02 crc kubenswrapper[4754]: I0218 19:33:02.817356 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g26mw" event={"ID":"f9342c20-c37f-45ee-b3d7-e56929cc39e5","Type":"ContainerStarted","Data":"36491827fa9f5d5ab5e93e4ddb36bf023c7004ccb533500984da20046f0963e1"} Feb 18 19:33:02 crc kubenswrapper[4754]: I0218 19:33:02.817368 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g26mw" event={"ID":"f9342c20-c37f-45ee-b3d7-e56929cc39e5","Type":"ContainerStarted","Data":"ed59cb5026897f5be346831e442859d64acc6ee602bd169110d51f492e7994ef"} Feb 18 19:33:02 crc kubenswrapper[4754]: I0218 19:33:02.817379 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g26mw" 
event={"ID":"f9342c20-c37f-45ee-b3d7-e56929cc39e5","Type":"ContainerStarted","Data":"33f4132b84be6adfbbba4d3f8f3cf1e0f0ce2102cdbe3b36a81e7c407c2dccf6"} Feb 18 19:33:02 crc kubenswrapper[4754]: I0218 19:33:02.817393 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g26mw" event={"ID":"f9342c20-c37f-45ee-b3d7-e56929cc39e5","Type":"ContainerStarted","Data":"bf6bae6cb137ae0682a9707ba362373c84fe78ef0fcfcf0685bc258df62e0bbb"} Feb 18 19:33:02 crc kubenswrapper[4754]: I0218 19:33:02.821418 4754 generic.go:334] "Generic (PLEG): container finished" podID="328e7255-9eab-4197-9621-a3d2853a1f8a" containerID="18690acbdbbcd36f4794f0a9c7d4f662f20a4698610ec3f1db4e29bf82cb8a53" exitCode=0 Feb 18 19:33:02 crc kubenswrapper[4754]: I0218 19:33:02.821490 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fftwg" event={"ID":"328e7255-9eab-4197-9621-a3d2853a1f8a","Type":"ContainerDied","Data":"18690acbdbbcd36f4794f0a9c7d4f662f20a4698610ec3f1db4e29bf82cb8a53"} Feb 18 19:33:03 crc kubenswrapper[4754]: I0218 19:33:03.834708 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fftwg" event={"ID":"328e7255-9eab-4197-9621-a3d2853a1f8a","Type":"ContainerStarted","Data":"2e81b33d3bfd0ad830d7a27039aa8964b6bac0f54c92a2e927d43d82f17ed663"} Feb 18 19:33:03 crc kubenswrapper[4754]: I0218 19:33:03.841322 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g26mw" event={"ID":"f9342c20-c37f-45ee-b3d7-e56929cc39e5","Type":"ContainerStarted","Data":"cc29e5fba27b19fa9931af7454675aa30a86288a7c0023592143056f8b2377ea"} Feb 18 19:33:03 crc kubenswrapper[4754]: I0218 19:33:03.842007 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-g26mw" Feb 18 19:33:03 crc kubenswrapper[4754]: I0218 19:33:03.919361 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-fftwg" podStartSLOduration=2.469057744 podStartE2EDuration="4.91933254s" podCreationTimestamp="2026-02-18 19:32:59 +0000 UTC" firstStartedPulling="2026-02-18 19:33:00.785222088 +0000 UTC m=+883.235634894" lastFinishedPulling="2026-02-18 19:33:03.235496864 +0000 UTC m=+885.685909690" observedRunningTime="2026-02-18 19:33:03.873392603 +0000 UTC m=+886.323805399" watchObservedRunningTime="2026-02-18 19:33:03.91933254 +0000 UTC m=+886.369745336" Feb 18 19:33:05 crc kubenswrapper[4754]: I0218 19:33:05.483222 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-g26mw" Feb 18 19:33:05 crc kubenswrapper[4754]: I0218 19:33:05.537272 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-g26mw" Feb 18 19:33:05 crc kubenswrapper[4754]: I0218 19:33:05.564984 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-g26mw" podStartSLOduration=7.6225553569999995 podStartE2EDuration="15.564959403s" podCreationTimestamp="2026-02-18 19:32:50 +0000 UTC" firstStartedPulling="2026-02-18 19:32:50.701235882 +0000 UTC m=+873.151648688" lastFinishedPulling="2026-02-18 19:32:58.643639938 +0000 UTC m=+881.094052734" observedRunningTime="2026-02-18 19:33:03.923562842 +0000 UTC m=+886.373975638" watchObservedRunningTime="2026-02-18 19:33:05.564959403 +0000 UTC m=+888.015372199" Feb 18 19:33:05 crc kubenswrapper[4754]: I0218 19:33:05.731433 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-k8s98"] Feb 18 19:33:05 crc kubenswrapper[4754]: I0218 19:33:05.732608 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-k8s98" Feb 18 19:33:05 crc kubenswrapper[4754]: I0218 19:33:05.735206 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-srkcf" Feb 18 19:33:05 crc kubenswrapper[4754]: I0218 19:33:05.735249 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 18 19:33:05 crc kubenswrapper[4754]: I0218 19:33:05.735442 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 18 19:33:05 crc kubenswrapper[4754]: I0218 19:33:05.747190 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k8s98"] Feb 18 19:33:05 crc kubenswrapper[4754]: I0218 19:33:05.813752 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84m7v\" (UniqueName: \"kubernetes.io/projected/691e4b26-f211-4df5-814f-a43f30d686dd-kube-api-access-84m7v\") pod \"openstack-operator-index-k8s98\" (UID: \"691e4b26-f211-4df5-814f-a43f30d686dd\") " pod="openstack-operators/openstack-operator-index-k8s98" Feb 18 19:33:05 crc kubenswrapper[4754]: I0218 19:33:05.914959 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84m7v\" (UniqueName: \"kubernetes.io/projected/691e4b26-f211-4df5-814f-a43f30d686dd-kube-api-access-84m7v\") pod \"openstack-operator-index-k8s98\" (UID: \"691e4b26-f211-4df5-814f-a43f30d686dd\") " pod="openstack-operators/openstack-operator-index-k8s98" Feb 18 19:33:05 crc kubenswrapper[4754]: I0218 19:33:05.940086 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84m7v\" (UniqueName: \"kubernetes.io/projected/691e4b26-f211-4df5-814f-a43f30d686dd-kube-api-access-84m7v\") pod \"openstack-operator-index-k8s98\" (UID: 
\"691e4b26-f211-4df5-814f-a43f30d686dd\") " pod="openstack-operators/openstack-operator-index-k8s98" Feb 18 19:33:06 crc kubenswrapper[4754]: I0218 19:33:06.093324 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-k8s98" Feb 18 19:33:06 crc kubenswrapper[4754]: I0218 19:33:06.604655 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k8s98"] Feb 18 19:33:06 crc kubenswrapper[4754]: W0218 19:33:06.612118 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod691e4b26_f211_4df5_814f_a43f30d686dd.slice/crio-8547a3d2da970ab2517015b28c2aa9ad4115e5e57065f32b8dce4cf098850158 WatchSource:0}: Error finding container 8547a3d2da970ab2517015b28c2aa9ad4115e5e57065f32b8dce4cf098850158: Status 404 returned error can't find the container with id 8547a3d2da970ab2517015b28c2aa9ad4115e5e57065f32b8dce4cf098850158 Feb 18 19:33:06 crc kubenswrapper[4754]: I0218 19:33:06.871476 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k8s98" event={"ID":"691e4b26-f211-4df5-814f-a43f30d686dd","Type":"ContainerStarted","Data":"8547a3d2da970ab2517015b28c2aa9ad4115e5e57065f32b8dce4cf098850158"} Feb 18 19:33:08 crc kubenswrapper[4754]: I0218 19:33:08.097537 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:33:08 crc kubenswrapper[4754]: I0218 19:33:08.097625 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:33:09 crc kubenswrapper[4754]: I0218 19:33:09.475591 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fftwg" Feb 18 19:33:09 crc kubenswrapper[4754]: I0218 19:33:09.475931 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fftwg" Feb 18 19:33:09 crc kubenswrapper[4754]: I0218 19:33:09.531912 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fftwg" Feb 18 19:33:09 crc kubenswrapper[4754]: I0218 19:33:09.899750 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k8s98" event={"ID":"691e4b26-f211-4df5-814f-a43f30d686dd","Type":"ContainerStarted","Data":"7f2ef661b891e73ca5cf8eb610f4a9ed481a55348bee6a7990cd022639dd2e93"} Feb 18 19:33:09 crc kubenswrapper[4754]: I0218 19:33:09.924058 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-k8s98" podStartSLOduration=1.882398145 podStartE2EDuration="4.924035972s" podCreationTimestamp="2026-02-18 19:33:05 +0000 UTC" firstStartedPulling="2026-02-18 19:33:06.616273113 +0000 UTC m=+889.066685909" lastFinishedPulling="2026-02-18 19:33:09.65791094 +0000 UTC m=+892.108323736" observedRunningTime="2026-02-18 19:33:09.920441311 +0000 UTC m=+892.370854107" watchObservedRunningTime="2026-02-18 19:33:09.924035972 +0000 UTC m=+892.374448768" Feb 18 19:33:09 crc kubenswrapper[4754]: I0218 19:33:09.958013 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fftwg" Feb 18 19:33:10 crc kubenswrapper[4754]: I0218 19:33:10.495608 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-llqrp" Feb 18 19:33:10 crc kubenswrapper[4754]: I0218 19:33:10.592258 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-2fxw5" Feb 18 19:33:13 crc kubenswrapper[4754]: I0218 19:33:13.332656 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fftwg"] Feb 18 19:33:13 crc kubenswrapper[4754]: I0218 19:33:13.333610 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fftwg" podUID="328e7255-9eab-4197-9621-a3d2853a1f8a" containerName="registry-server" containerID="cri-o://2e81b33d3bfd0ad830d7a27039aa8964b6bac0f54c92a2e927d43d82f17ed663" gracePeriod=2 Feb 18 19:33:13 crc kubenswrapper[4754]: I0218 19:33:13.934082 4754 generic.go:334] "Generic (PLEG): container finished" podID="328e7255-9eab-4197-9621-a3d2853a1f8a" containerID="2e81b33d3bfd0ad830d7a27039aa8964b6bac0f54c92a2e927d43d82f17ed663" exitCode=0 Feb 18 19:33:13 crc kubenswrapper[4754]: I0218 19:33:13.934175 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fftwg" event={"ID":"328e7255-9eab-4197-9621-a3d2853a1f8a","Type":"ContainerDied","Data":"2e81b33d3bfd0ad830d7a27039aa8964b6bac0f54c92a2e927d43d82f17ed663"} Feb 18 19:33:14 crc kubenswrapper[4754]: I0218 19:33:14.504626 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fftwg" Feb 18 19:33:14 crc kubenswrapper[4754]: I0218 19:33:14.549671 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xbxx\" (UniqueName: \"kubernetes.io/projected/328e7255-9eab-4197-9621-a3d2853a1f8a-kube-api-access-2xbxx\") pod \"328e7255-9eab-4197-9621-a3d2853a1f8a\" (UID: \"328e7255-9eab-4197-9621-a3d2853a1f8a\") " Feb 18 19:33:14 crc kubenswrapper[4754]: I0218 19:33:14.549786 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328e7255-9eab-4197-9621-a3d2853a1f8a-utilities\") pod \"328e7255-9eab-4197-9621-a3d2853a1f8a\" (UID: \"328e7255-9eab-4197-9621-a3d2853a1f8a\") " Feb 18 19:33:14 crc kubenswrapper[4754]: I0218 19:33:14.549829 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328e7255-9eab-4197-9621-a3d2853a1f8a-catalog-content\") pod \"328e7255-9eab-4197-9621-a3d2853a1f8a\" (UID: \"328e7255-9eab-4197-9621-a3d2853a1f8a\") " Feb 18 19:33:14 crc kubenswrapper[4754]: I0218 19:33:14.551126 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/328e7255-9eab-4197-9621-a3d2853a1f8a-utilities" (OuterVolumeSpecName: "utilities") pod "328e7255-9eab-4197-9621-a3d2853a1f8a" (UID: "328e7255-9eab-4197-9621-a3d2853a1f8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:33:14 crc kubenswrapper[4754]: I0218 19:33:14.557115 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328e7255-9eab-4197-9621-a3d2853a1f8a-kube-api-access-2xbxx" (OuterVolumeSpecName: "kube-api-access-2xbxx") pod "328e7255-9eab-4197-9621-a3d2853a1f8a" (UID: "328e7255-9eab-4197-9621-a3d2853a1f8a"). InnerVolumeSpecName "kube-api-access-2xbxx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:14 crc kubenswrapper[4754]: I0218 19:33:14.600054 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/328e7255-9eab-4197-9621-a3d2853a1f8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "328e7255-9eab-4197-9621-a3d2853a1f8a" (UID: "328e7255-9eab-4197-9621-a3d2853a1f8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:33:14 crc kubenswrapper[4754]: I0218 19:33:14.652236 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xbxx\" (UniqueName: \"kubernetes.io/projected/328e7255-9eab-4197-9621-a3d2853a1f8a-kube-api-access-2xbxx\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:14 crc kubenswrapper[4754]: I0218 19:33:14.652296 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328e7255-9eab-4197-9621-a3d2853a1f8a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:14 crc kubenswrapper[4754]: I0218 19:33:14.652312 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328e7255-9eab-4197-9621-a3d2853a1f8a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:14 crc kubenswrapper[4754]: I0218 19:33:14.946692 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fftwg" event={"ID":"328e7255-9eab-4197-9621-a3d2853a1f8a","Type":"ContainerDied","Data":"0ffce40822db18fdb7a009458123c399bb20de007ca91832699a01c8ce2d8626"} Feb 18 19:33:14 crc kubenswrapper[4754]: I0218 19:33:14.946774 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fftwg" Feb 18 19:33:14 crc kubenswrapper[4754]: I0218 19:33:14.946791 4754 scope.go:117] "RemoveContainer" containerID="2e81b33d3bfd0ad830d7a27039aa8964b6bac0f54c92a2e927d43d82f17ed663" Feb 18 19:33:14 crc kubenswrapper[4754]: I0218 19:33:14.978276 4754 scope.go:117] "RemoveContainer" containerID="18690acbdbbcd36f4794f0a9c7d4f662f20a4698610ec3f1db4e29bf82cb8a53" Feb 18 19:33:14 crc kubenswrapper[4754]: I0218 19:33:14.986788 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fftwg"] Feb 18 19:33:14 crc kubenswrapper[4754]: I0218 19:33:14.996591 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fftwg"] Feb 18 19:33:15 crc kubenswrapper[4754]: I0218 19:33:15.007378 4754 scope.go:117] "RemoveContainer" containerID="039b6f6939ec25e041f777bbcf694c9d0a251a90267ee5fc2585268b7152ccc3" Feb 18 19:33:16 crc kubenswrapper[4754]: I0218 19:33:16.093931 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-k8s98" Feb 18 19:33:16 crc kubenswrapper[4754]: I0218 19:33:16.094466 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-k8s98" Feb 18 19:33:16 crc kubenswrapper[4754]: I0218 19:33:16.133200 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-k8s98" Feb 18 19:33:16 crc kubenswrapper[4754]: I0218 19:33:16.218134 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="328e7255-9eab-4197-9621-a3d2853a1f8a" path="/var/lib/kubelet/pods/328e7255-9eab-4197-9621-a3d2853a1f8a/volumes" Feb 18 19:33:17 crc kubenswrapper[4754]: I0218 19:33:17.002380 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-k8s98" Feb 18 
19:33:18 crc kubenswrapper[4754]: I0218 19:33:18.377074 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq"] Feb 18 19:33:18 crc kubenswrapper[4754]: E0218 19:33:18.377390 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328e7255-9eab-4197-9621-a3d2853a1f8a" containerName="extract-content" Feb 18 19:33:18 crc kubenswrapper[4754]: I0218 19:33:18.377402 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="328e7255-9eab-4197-9621-a3d2853a1f8a" containerName="extract-content" Feb 18 19:33:18 crc kubenswrapper[4754]: E0218 19:33:18.377472 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328e7255-9eab-4197-9621-a3d2853a1f8a" containerName="registry-server" Feb 18 19:33:18 crc kubenswrapper[4754]: I0218 19:33:18.377480 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="328e7255-9eab-4197-9621-a3d2853a1f8a" containerName="registry-server" Feb 18 19:33:18 crc kubenswrapper[4754]: E0218 19:33:18.377507 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328e7255-9eab-4197-9621-a3d2853a1f8a" containerName="extract-utilities" Feb 18 19:33:18 crc kubenswrapper[4754]: I0218 19:33:18.377513 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="328e7255-9eab-4197-9621-a3d2853a1f8a" containerName="extract-utilities" Feb 18 19:33:18 crc kubenswrapper[4754]: I0218 19:33:18.377629 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="328e7255-9eab-4197-9621-a3d2853a1f8a" containerName="registry-server" Feb 18 19:33:18 crc kubenswrapper[4754]: I0218 19:33:18.378545 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq" Feb 18 19:33:18 crc kubenswrapper[4754]: I0218 19:33:18.381596 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-mf4v9" Feb 18 19:33:18 crc kubenswrapper[4754]: I0218 19:33:18.407732 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq"] Feb 18 19:33:18 crc kubenswrapper[4754]: I0218 19:33:18.416073 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/797f22b8-a587-4b4b-9f68-cf833ee63548-bundle\") pod \"85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq\" (UID: \"797f22b8-a587-4b4b-9f68-cf833ee63548\") " pod="openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq" Feb 18 19:33:18 crc kubenswrapper[4754]: I0218 19:33:18.416131 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/797f22b8-a587-4b4b-9f68-cf833ee63548-util\") pod \"85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq\" (UID: \"797f22b8-a587-4b4b-9f68-cf833ee63548\") " pod="openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq" Feb 18 19:33:18 crc kubenswrapper[4754]: I0218 19:33:18.416276 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvtbg\" (UniqueName: \"kubernetes.io/projected/797f22b8-a587-4b4b-9f68-cf833ee63548-kube-api-access-zvtbg\") pod \"85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq\" (UID: \"797f22b8-a587-4b4b-9f68-cf833ee63548\") " pod="openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq" Feb 18 19:33:18 crc kubenswrapper[4754]: I0218 
19:33:18.517999 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvtbg\" (UniqueName: \"kubernetes.io/projected/797f22b8-a587-4b4b-9f68-cf833ee63548-kube-api-access-zvtbg\") pod \"85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq\" (UID: \"797f22b8-a587-4b4b-9f68-cf833ee63548\") " pod="openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq" Feb 18 19:33:18 crc kubenswrapper[4754]: I0218 19:33:18.518516 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/797f22b8-a587-4b4b-9f68-cf833ee63548-bundle\") pod \"85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq\" (UID: \"797f22b8-a587-4b4b-9f68-cf833ee63548\") " pod="openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq" Feb 18 19:33:18 crc kubenswrapper[4754]: I0218 19:33:18.518550 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/797f22b8-a587-4b4b-9f68-cf833ee63548-util\") pod \"85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq\" (UID: \"797f22b8-a587-4b4b-9f68-cf833ee63548\") " pod="openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq" Feb 18 19:33:18 crc kubenswrapper[4754]: I0218 19:33:18.519276 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/797f22b8-a587-4b4b-9f68-cf833ee63548-util\") pod \"85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq\" (UID: \"797f22b8-a587-4b4b-9f68-cf833ee63548\") " pod="openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq" Feb 18 19:33:18 crc kubenswrapper[4754]: I0218 19:33:18.519299 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/797f22b8-a587-4b4b-9f68-cf833ee63548-bundle\") pod \"85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq\" (UID: \"797f22b8-a587-4b4b-9f68-cf833ee63548\") " pod="openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq" Feb 18 19:33:18 crc kubenswrapper[4754]: I0218 19:33:18.544381 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvtbg\" (UniqueName: \"kubernetes.io/projected/797f22b8-a587-4b4b-9f68-cf833ee63548-kube-api-access-zvtbg\") pod \"85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq\" (UID: \"797f22b8-a587-4b4b-9f68-cf833ee63548\") " pod="openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq" Feb 18 19:33:18 crc kubenswrapper[4754]: I0218 19:33:18.710015 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq" Feb 18 19:33:19 crc kubenswrapper[4754]: I0218 19:33:19.174701 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq"] Feb 18 19:33:20 crc kubenswrapper[4754]: I0218 19:33:20.004206 4754 generic.go:334] "Generic (PLEG): container finished" podID="797f22b8-a587-4b4b-9f68-cf833ee63548" containerID="d10074f5f7b8597c0029f3e7f00552df6007c9d902c48845897adcb8e2b14a55" exitCode=0 Feb 18 19:33:20 crc kubenswrapper[4754]: I0218 19:33:20.004273 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq" event={"ID":"797f22b8-a587-4b4b-9f68-cf833ee63548","Type":"ContainerDied","Data":"d10074f5f7b8597c0029f3e7f00552df6007c9d902c48845897adcb8e2b14a55"} Feb 18 19:33:20 crc kubenswrapper[4754]: I0218 19:33:20.004679 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq" event={"ID":"797f22b8-a587-4b4b-9f68-cf833ee63548","Type":"ContainerStarted","Data":"98db39a16430d758d2a9743bbd34eda08863b0ad59bc88f41b2fa123b435b3c8"} Feb 18 19:33:20 crc kubenswrapper[4754]: I0218 19:33:20.488744 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-g26mw" Feb 18 19:33:21 crc kubenswrapper[4754]: I0218 19:33:21.015268 4754 generic.go:334] "Generic (PLEG): container finished" podID="797f22b8-a587-4b4b-9f68-cf833ee63548" containerID="d653f0216c47dc8cc1a8d530dd1e102a782b14017827fd930e6c5365be4cfa7d" exitCode=0 Feb 18 19:33:21 crc kubenswrapper[4754]: I0218 19:33:21.015365 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq" event={"ID":"797f22b8-a587-4b4b-9f68-cf833ee63548","Type":"ContainerDied","Data":"d653f0216c47dc8cc1a8d530dd1e102a782b14017827fd930e6c5365be4cfa7d"} Feb 18 19:33:22 crc kubenswrapper[4754]: I0218 19:33:22.038290 4754 generic.go:334] "Generic (PLEG): container finished" podID="797f22b8-a587-4b4b-9f68-cf833ee63548" containerID="94e53e357340a538f713121d996f3eb9a958ba7186df3995d70475c02c72d0d5" exitCode=0 Feb 18 19:33:22 crc kubenswrapper[4754]: I0218 19:33:22.038387 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq" event={"ID":"797f22b8-a587-4b4b-9f68-cf833ee63548","Type":"ContainerDied","Data":"94e53e357340a538f713121d996f3eb9a958ba7186df3995d70475c02c72d0d5"} Feb 18 19:33:23 crc kubenswrapper[4754]: I0218 19:33:23.405006 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq" Feb 18 19:33:23 crc kubenswrapper[4754]: I0218 19:33:23.510481 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/797f22b8-a587-4b4b-9f68-cf833ee63548-bundle\") pod \"797f22b8-a587-4b4b-9f68-cf833ee63548\" (UID: \"797f22b8-a587-4b4b-9f68-cf833ee63548\") " Feb 18 19:33:23 crc kubenswrapper[4754]: I0218 19:33:23.510685 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/797f22b8-a587-4b4b-9f68-cf833ee63548-util\") pod \"797f22b8-a587-4b4b-9f68-cf833ee63548\" (UID: \"797f22b8-a587-4b4b-9f68-cf833ee63548\") " Feb 18 19:33:23 crc kubenswrapper[4754]: I0218 19:33:23.511055 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvtbg\" (UniqueName: \"kubernetes.io/projected/797f22b8-a587-4b4b-9f68-cf833ee63548-kube-api-access-zvtbg\") pod \"797f22b8-a587-4b4b-9f68-cf833ee63548\" (UID: \"797f22b8-a587-4b4b-9f68-cf833ee63548\") " Feb 18 19:33:23 crc kubenswrapper[4754]: I0218 19:33:23.513134 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/797f22b8-a587-4b4b-9f68-cf833ee63548-bundle" (OuterVolumeSpecName: "bundle") pod "797f22b8-a587-4b4b-9f68-cf833ee63548" (UID: "797f22b8-a587-4b4b-9f68-cf833ee63548"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:33:23 crc kubenswrapper[4754]: I0218 19:33:23.519594 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/797f22b8-a587-4b4b-9f68-cf833ee63548-kube-api-access-zvtbg" (OuterVolumeSpecName: "kube-api-access-zvtbg") pod "797f22b8-a587-4b4b-9f68-cf833ee63548" (UID: "797f22b8-a587-4b4b-9f68-cf833ee63548"). InnerVolumeSpecName "kube-api-access-zvtbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:23 crc kubenswrapper[4754]: I0218 19:33:23.526029 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/797f22b8-a587-4b4b-9f68-cf833ee63548-util" (OuterVolumeSpecName: "util") pod "797f22b8-a587-4b4b-9f68-cf833ee63548" (UID: "797f22b8-a587-4b4b-9f68-cf833ee63548"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:33:23 crc kubenswrapper[4754]: I0218 19:33:23.612887 4754 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/797f22b8-a587-4b4b-9f68-cf833ee63548-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:23 crc kubenswrapper[4754]: I0218 19:33:23.612925 4754 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/797f22b8-a587-4b4b-9f68-cf833ee63548-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:23 crc kubenswrapper[4754]: I0218 19:33:23.612939 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvtbg\" (UniqueName: \"kubernetes.io/projected/797f22b8-a587-4b4b-9f68-cf833ee63548-kube-api-access-zvtbg\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:24 crc kubenswrapper[4754]: I0218 19:33:24.064023 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq" event={"ID":"797f22b8-a587-4b4b-9f68-cf833ee63548","Type":"ContainerDied","Data":"98db39a16430d758d2a9743bbd34eda08863b0ad59bc88f41b2fa123b435b3c8"} Feb 18 19:33:24 crc kubenswrapper[4754]: I0218 19:33:24.064105 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98db39a16430d758d2a9743bbd34eda08863b0ad59bc88f41b2fa123b435b3c8" Feb 18 19:33:24 crc kubenswrapper[4754]: I0218 19:33:24.064174 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/85a4c68217f610993c8d8e9cb5515d34cd29bc0a61dffa1ef680e6b52cj4pmq" Feb 18 19:33:26 crc kubenswrapper[4754]: I0218 19:33:26.917337 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-8d54dbc5b-krctb"] Feb 18 19:33:26 crc kubenswrapper[4754]: E0218 19:33:26.918878 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797f22b8-a587-4b4b-9f68-cf833ee63548" containerName="pull" Feb 18 19:33:26 crc kubenswrapper[4754]: I0218 19:33:26.918987 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="797f22b8-a587-4b4b-9f68-cf833ee63548" containerName="pull" Feb 18 19:33:26 crc kubenswrapper[4754]: E0218 19:33:26.919059 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797f22b8-a587-4b4b-9f68-cf833ee63548" containerName="util" Feb 18 19:33:26 crc kubenswrapper[4754]: I0218 19:33:26.919112 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="797f22b8-a587-4b4b-9f68-cf833ee63548" containerName="util" Feb 18 19:33:26 crc kubenswrapper[4754]: E0218 19:33:26.919182 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797f22b8-a587-4b4b-9f68-cf833ee63548" containerName="extract" Feb 18 19:33:26 crc kubenswrapper[4754]: I0218 19:33:26.919233 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="797f22b8-a587-4b4b-9f68-cf833ee63548" containerName="extract" Feb 18 19:33:26 crc kubenswrapper[4754]: I0218 19:33:26.919398 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="797f22b8-a587-4b4b-9f68-cf833ee63548" containerName="extract" Feb 18 19:33:26 crc kubenswrapper[4754]: I0218 19:33:26.921033 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-8d54dbc5b-krctb" Feb 18 19:33:26 crc kubenswrapper[4754]: I0218 19:33:26.925365 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-6f55x" Feb 18 19:33:26 crc kubenswrapper[4754]: I0218 19:33:26.959952 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-8d54dbc5b-krctb"] Feb 18 19:33:26 crc kubenswrapper[4754]: I0218 19:33:26.969078 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnbwp\" (UniqueName: \"kubernetes.io/projected/4c68e766-2ab6-4050-84c7-a0cf533ccd55-kube-api-access-rnbwp\") pod \"openstack-operator-controller-init-8d54dbc5b-krctb\" (UID: \"4c68e766-2ab6-4050-84c7-a0cf533ccd55\") " pod="openstack-operators/openstack-operator-controller-init-8d54dbc5b-krctb" Feb 18 19:33:27 crc kubenswrapper[4754]: I0218 19:33:27.070735 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnbwp\" (UniqueName: \"kubernetes.io/projected/4c68e766-2ab6-4050-84c7-a0cf533ccd55-kube-api-access-rnbwp\") pod \"openstack-operator-controller-init-8d54dbc5b-krctb\" (UID: \"4c68e766-2ab6-4050-84c7-a0cf533ccd55\") " pod="openstack-operators/openstack-operator-controller-init-8d54dbc5b-krctb" Feb 18 19:33:27 crc kubenswrapper[4754]: I0218 19:33:27.109381 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnbwp\" (UniqueName: \"kubernetes.io/projected/4c68e766-2ab6-4050-84c7-a0cf533ccd55-kube-api-access-rnbwp\") pod \"openstack-operator-controller-init-8d54dbc5b-krctb\" (UID: \"4c68e766-2ab6-4050-84c7-a0cf533ccd55\") " pod="openstack-operators/openstack-operator-controller-init-8d54dbc5b-krctb" Feb 18 19:33:27 crc kubenswrapper[4754]: I0218 19:33:27.247637 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-8d54dbc5b-krctb" Feb 18 19:33:27 crc kubenswrapper[4754]: I0218 19:33:27.771492 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-8d54dbc5b-krctb"] Feb 18 19:33:28 crc kubenswrapper[4754]: I0218 19:33:28.097189 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-8d54dbc5b-krctb" event={"ID":"4c68e766-2ab6-4050-84c7-a0cf533ccd55","Type":"ContainerStarted","Data":"c173357b392638a9b54b09162e7949f3e2f3a29d45ade05be274cca1bab77e78"} Feb 18 19:33:34 crc kubenswrapper[4754]: I0218 19:33:34.152968 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-8d54dbc5b-krctb" event={"ID":"4c68e766-2ab6-4050-84c7-a0cf533ccd55","Type":"ContainerStarted","Data":"74845dfa6edeb9524834bc65e2a7d434d127a8b71a97fd5f4c7778fe506fc7dc"} Feb 18 19:33:34 crc kubenswrapper[4754]: I0218 19:33:34.154690 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-8d54dbc5b-krctb" Feb 18 19:33:34 crc kubenswrapper[4754]: I0218 19:33:34.195072 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-8d54dbc5b-krctb" podStartSLOduration=2.6001309089999998 podStartE2EDuration="8.195044101s" podCreationTimestamp="2026-02-18 19:33:26 +0000 UTC" firstStartedPulling="2026-02-18 19:33:27.766452861 +0000 UTC m=+910.216865657" lastFinishedPulling="2026-02-18 19:33:33.361366013 +0000 UTC m=+915.811778849" observedRunningTime="2026-02-18 19:33:34.184735973 +0000 UTC m=+916.635148799" watchObservedRunningTime="2026-02-18 19:33:34.195044101 +0000 UTC m=+916.645456927" Feb 18 19:33:35 crc kubenswrapper[4754]: I0218 19:33:35.935972 4754 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-xllnk"] Feb 18 19:33:35 crc kubenswrapper[4754]: I0218 19:33:35.937850 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xllnk" Feb 18 19:33:36 crc kubenswrapper[4754]: I0218 19:33:35.958441 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xllnk"] Feb 18 19:33:36 crc kubenswrapper[4754]: I0218 19:33:36.049762 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e424238-2fca-49c6-ab62-981792d4fd27-utilities\") pod \"redhat-marketplace-xllnk\" (UID: \"6e424238-2fca-49c6-ab62-981792d4fd27\") " pod="openshift-marketplace/redhat-marketplace-xllnk" Feb 18 19:33:36 crc kubenswrapper[4754]: I0218 19:33:36.049855 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e424238-2fca-49c6-ab62-981792d4fd27-catalog-content\") pod \"redhat-marketplace-xllnk\" (UID: \"6e424238-2fca-49c6-ab62-981792d4fd27\") " pod="openshift-marketplace/redhat-marketplace-xllnk" Feb 18 19:33:36 crc kubenswrapper[4754]: I0218 19:33:36.049928 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtq66\" (UniqueName: \"kubernetes.io/projected/6e424238-2fca-49c6-ab62-981792d4fd27-kube-api-access-xtq66\") pod \"redhat-marketplace-xllnk\" (UID: \"6e424238-2fca-49c6-ab62-981792d4fd27\") " pod="openshift-marketplace/redhat-marketplace-xllnk" Feb 18 19:33:36 crc kubenswrapper[4754]: I0218 19:33:36.151246 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e424238-2fca-49c6-ab62-981792d4fd27-utilities\") pod \"redhat-marketplace-xllnk\" (UID: \"6e424238-2fca-49c6-ab62-981792d4fd27\") " 
pod="openshift-marketplace/redhat-marketplace-xllnk" Feb 18 19:33:36 crc kubenswrapper[4754]: I0218 19:33:36.151340 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e424238-2fca-49c6-ab62-981792d4fd27-catalog-content\") pod \"redhat-marketplace-xllnk\" (UID: \"6e424238-2fca-49c6-ab62-981792d4fd27\") " pod="openshift-marketplace/redhat-marketplace-xllnk" Feb 18 19:33:36 crc kubenswrapper[4754]: I0218 19:33:36.151430 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtq66\" (UniqueName: \"kubernetes.io/projected/6e424238-2fca-49c6-ab62-981792d4fd27-kube-api-access-xtq66\") pod \"redhat-marketplace-xllnk\" (UID: \"6e424238-2fca-49c6-ab62-981792d4fd27\") " pod="openshift-marketplace/redhat-marketplace-xllnk" Feb 18 19:33:36 crc kubenswrapper[4754]: I0218 19:33:36.152027 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e424238-2fca-49c6-ab62-981792d4fd27-utilities\") pod \"redhat-marketplace-xllnk\" (UID: \"6e424238-2fca-49c6-ab62-981792d4fd27\") " pod="openshift-marketplace/redhat-marketplace-xllnk" Feb 18 19:33:36 crc kubenswrapper[4754]: I0218 19:33:36.152231 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e424238-2fca-49c6-ab62-981792d4fd27-catalog-content\") pod \"redhat-marketplace-xllnk\" (UID: \"6e424238-2fca-49c6-ab62-981792d4fd27\") " pod="openshift-marketplace/redhat-marketplace-xllnk" Feb 18 19:33:36 crc kubenswrapper[4754]: I0218 19:33:36.173253 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtq66\" (UniqueName: \"kubernetes.io/projected/6e424238-2fca-49c6-ab62-981792d4fd27-kube-api-access-xtq66\") pod \"redhat-marketplace-xllnk\" (UID: \"6e424238-2fca-49c6-ab62-981792d4fd27\") " 
pod="openshift-marketplace/redhat-marketplace-xllnk" Feb 18 19:33:36 crc kubenswrapper[4754]: I0218 19:33:36.358819 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xllnk" Feb 18 19:33:36 crc kubenswrapper[4754]: I0218 19:33:36.603557 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xllnk"] Feb 18 19:33:37 crc kubenswrapper[4754]: I0218 19:33:37.177200 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xllnk" event={"ID":"6e424238-2fca-49c6-ab62-981792d4fd27","Type":"ContainerStarted","Data":"011e8e702ff42f54aae10d4500d671355d0950346a650d70de08ddca6149d4f3"} Feb 18 19:33:38 crc kubenswrapper[4754]: I0218 19:33:38.096466 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:33:38 crc kubenswrapper[4754]: I0218 19:33:38.096567 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:33:38 crc kubenswrapper[4754]: I0218 19:33:38.187375 4754 generic.go:334] "Generic (PLEG): container finished" podID="6e424238-2fca-49c6-ab62-981792d4fd27" containerID="7b60b16e18aaaacc3305b279aaad962b7ae0f13c627ffa15c5cfa853fa05c287" exitCode=0 Feb 18 19:33:38 crc kubenswrapper[4754]: I0218 19:33:38.187454 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xllnk" 
event={"ID":"6e424238-2fca-49c6-ab62-981792d4fd27","Type":"ContainerDied","Data":"7b60b16e18aaaacc3305b279aaad962b7ae0f13c627ffa15c5cfa853fa05c287"} Feb 18 19:33:39 crc kubenswrapper[4754]: I0218 19:33:39.198037 4754 generic.go:334] "Generic (PLEG): container finished" podID="6e424238-2fca-49c6-ab62-981792d4fd27" containerID="c4d61e4f7cff906b6bef6047bead9532ec22851970c51d70124966c73b4a6350" exitCode=0 Feb 18 19:33:39 crc kubenswrapper[4754]: I0218 19:33:39.198165 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xllnk" event={"ID":"6e424238-2fca-49c6-ab62-981792d4fd27","Type":"ContainerDied","Data":"c4d61e4f7cff906b6bef6047bead9532ec22851970c51d70124966c73b4a6350"} Feb 18 19:33:40 crc kubenswrapper[4754]: I0218 19:33:40.222617 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xllnk" event={"ID":"6e424238-2fca-49c6-ab62-981792d4fd27","Type":"ContainerStarted","Data":"99f165c2ee8b630b39dc31b9aad410fc2fc9b8c7b9720e8fcccd6614751c7d52"} Feb 18 19:33:40 crc kubenswrapper[4754]: I0218 19:33:40.245342 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xllnk" podStartSLOduration=3.768955058 podStartE2EDuration="5.245315096s" podCreationTimestamp="2026-02-18 19:33:35 +0000 UTC" firstStartedPulling="2026-02-18 19:33:38.188994776 +0000 UTC m=+920.639407572" lastFinishedPulling="2026-02-18 19:33:39.665354784 +0000 UTC m=+922.115767610" observedRunningTime="2026-02-18 19:33:40.23733606 +0000 UTC m=+922.687748896" watchObservedRunningTime="2026-02-18 19:33:40.245315096 +0000 UTC m=+922.695727912" Feb 18 19:33:46 crc kubenswrapper[4754]: I0218 19:33:46.359984 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xllnk" Feb 18 19:33:46 crc kubenswrapper[4754]: I0218 19:33:46.360805 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xllnk" Feb 18 19:33:46 crc kubenswrapper[4754]: I0218 19:33:46.414360 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xllnk" Feb 18 19:33:47 crc kubenswrapper[4754]: I0218 19:33:47.250125 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-8d54dbc5b-krctb" Feb 18 19:33:47 crc kubenswrapper[4754]: I0218 19:33:47.339506 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xllnk" Feb 18 19:33:48 crc kubenswrapper[4754]: I0218 19:33:48.719220 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xllnk"] Feb 18 19:33:49 crc kubenswrapper[4754]: I0218 19:33:49.275769 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xllnk" podUID="6e424238-2fca-49c6-ab62-981792d4fd27" containerName="registry-server" containerID="cri-o://99f165c2ee8b630b39dc31b9aad410fc2fc9b8c7b9720e8fcccd6614751c7d52" gracePeriod=2 Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.183605 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xllnk" Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.286054 4754 generic.go:334] "Generic (PLEG): container finished" podID="6e424238-2fca-49c6-ab62-981792d4fd27" containerID="99f165c2ee8b630b39dc31b9aad410fc2fc9b8c7b9720e8fcccd6614751c7d52" exitCode=0 Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.286122 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xllnk" Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.286134 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xllnk" event={"ID":"6e424238-2fca-49c6-ab62-981792d4fd27","Type":"ContainerDied","Data":"99f165c2ee8b630b39dc31b9aad410fc2fc9b8c7b9720e8fcccd6614751c7d52"} Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.286361 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xllnk" event={"ID":"6e424238-2fca-49c6-ab62-981792d4fd27","Type":"ContainerDied","Data":"011e8e702ff42f54aae10d4500d671355d0950346a650d70de08ddca6149d4f3"} Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.286423 4754 scope.go:117] "RemoveContainer" containerID="99f165c2ee8b630b39dc31b9aad410fc2fc9b8c7b9720e8fcccd6614751c7d52" Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.308337 4754 scope.go:117] "RemoveContainer" containerID="c4d61e4f7cff906b6bef6047bead9532ec22851970c51d70124966c73b4a6350" Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.328360 4754 scope.go:117] "RemoveContainer" containerID="7b60b16e18aaaacc3305b279aaad962b7ae0f13c627ffa15c5cfa853fa05c287" Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.360601 4754 scope.go:117] "RemoveContainer" containerID="99f165c2ee8b630b39dc31b9aad410fc2fc9b8c7b9720e8fcccd6614751c7d52" Feb 18 19:33:50 crc kubenswrapper[4754]: E0218 19:33:50.362708 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99f165c2ee8b630b39dc31b9aad410fc2fc9b8c7b9720e8fcccd6614751c7d52\": container with ID starting with 99f165c2ee8b630b39dc31b9aad410fc2fc9b8c7b9720e8fcccd6614751c7d52 not found: ID does not exist" containerID="99f165c2ee8b630b39dc31b9aad410fc2fc9b8c7b9720e8fcccd6614751c7d52" Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.362759 4754 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f165c2ee8b630b39dc31b9aad410fc2fc9b8c7b9720e8fcccd6614751c7d52"} err="failed to get container status \"99f165c2ee8b630b39dc31b9aad410fc2fc9b8c7b9720e8fcccd6614751c7d52\": rpc error: code = NotFound desc = could not find container \"99f165c2ee8b630b39dc31b9aad410fc2fc9b8c7b9720e8fcccd6614751c7d52\": container with ID starting with 99f165c2ee8b630b39dc31b9aad410fc2fc9b8c7b9720e8fcccd6614751c7d52 not found: ID does not exist" Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.362793 4754 scope.go:117] "RemoveContainer" containerID="c4d61e4f7cff906b6bef6047bead9532ec22851970c51d70124966c73b4a6350" Feb 18 19:33:50 crc kubenswrapper[4754]: E0218 19:33:50.363186 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4d61e4f7cff906b6bef6047bead9532ec22851970c51d70124966c73b4a6350\": container with ID starting with c4d61e4f7cff906b6bef6047bead9532ec22851970c51d70124966c73b4a6350 not found: ID does not exist" containerID="c4d61e4f7cff906b6bef6047bead9532ec22851970c51d70124966c73b4a6350" Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.363266 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4d61e4f7cff906b6bef6047bead9532ec22851970c51d70124966c73b4a6350"} err="failed to get container status \"c4d61e4f7cff906b6bef6047bead9532ec22851970c51d70124966c73b4a6350\": rpc error: code = NotFound desc = could not find container \"c4d61e4f7cff906b6bef6047bead9532ec22851970c51d70124966c73b4a6350\": container with ID starting with c4d61e4f7cff906b6bef6047bead9532ec22851970c51d70124966c73b4a6350 not found: ID does not exist" Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.363328 4754 scope.go:117] "RemoveContainer" containerID="7b60b16e18aaaacc3305b279aaad962b7ae0f13c627ffa15c5cfa853fa05c287" Feb 18 19:33:50 crc kubenswrapper[4754]: E0218 19:33:50.363677 4754 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b60b16e18aaaacc3305b279aaad962b7ae0f13c627ffa15c5cfa853fa05c287\": container with ID starting with 7b60b16e18aaaacc3305b279aaad962b7ae0f13c627ffa15c5cfa853fa05c287 not found: ID does not exist" containerID="7b60b16e18aaaacc3305b279aaad962b7ae0f13c627ffa15c5cfa853fa05c287" Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.363722 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b60b16e18aaaacc3305b279aaad962b7ae0f13c627ffa15c5cfa853fa05c287"} err="failed to get container status \"7b60b16e18aaaacc3305b279aaad962b7ae0f13c627ffa15c5cfa853fa05c287\": rpc error: code = NotFound desc = could not find container \"7b60b16e18aaaacc3305b279aaad962b7ae0f13c627ffa15c5cfa853fa05c287\": container with ID starting with 7b60b16e18aaaacc3305b279aaad962b7ae0f13c627ffa15c5cfa853fa05c287 not found: ID does not exist" Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.379261 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtq66\" (UniqueName: \"kubernetes.io/projected/6e424238-2fca-49c6-ab62-981792d4fd27-kube-api-access-xtq66\") pod \"6e424238-2fca-49c6-ab62-981792d4fd27\" (UID: \"6e424238-2fca-49c6-ab62-981792d4fd27\") " Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.379310 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e424238-2fca-49c6-ab62-981792d4fd27-catalog-content\") pod \"6e424238-2fca-49c6-ab62-981792d4fd27\" (UID: \"6e424238-2fca-49c6-ab62-981792d4fd27\") " Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.379491 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e424238-2fca-49c6-ab62-981792d4fd27-utilities\") pod \"6e424238-2fca-49c6-ab62-981792d4fd27\" 
(UID: \"6e424238-2fca-49c6-ab62-981792d4fd27\") " Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.380617 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e424238-2fca-49c6-ab62-981792d4fd27-utilities" (OuterVolumeSpecName: "utilities") pod "6e424238-2fca-49c6-ab62-981792d4fd27" (UID: "6e424238-2fca-49c6-ab62-981792d4fd27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.388166 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e424238-2fca-49c6-ab62-981792d4fd27-kube-api-access-xtq66" (OuterVolumeSpecName: "kube-api-access-xtq66") pod "6e424238-2fca-49c6-ab62-981792d4fd27" (UID: "6e424238-2fca-49c6-ab62-981792d4fd27"). InnerVolumeSpecName "kube-api-access-xtq66". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.417375 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e424238-2fca-49c6-ab62-981792d4fd27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e424238-2fca-49c6-ab62-981792d4fd27" (UID: "6e424238-2fca-49c6-ab62-981792d4fd27"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.481669 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e424238-2fca-49c6-ab62-981792d4fd27-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.481704 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtq66\" (UniqueName: \"kubernetes.io/projected/6e424238-2fca-49c6-ab62-981792d4fd27-kube-api-access-xtq66\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.481721 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e424238-2fca-49c6-ab62-981792d4fd27-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.614754 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xllnk"] Feb 18 19:33:50 crc kubenswrapper[4754]: I0218 19:33:50.620657 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xllnk"] Feb 18 19:33:50 crc kubenswrapper[4754]: E0218 19:33:50.650982 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e424238_2fca_49c6_ab62_981792d4fd27.slice/crio-011e8e702ff42f54aae10d4500d671355d0950346a650d70de08ddca6149d4f3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e424238_2fca_49c6_ab62_981792d4fd27.slice\": RecentStats: unable to find data in memory cache]" Feb 18 19:33:52 crc kubenswrapper[4754]: I0218 19:33:52.218494 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e424238-2fca-49c6-ab62-981792d4fd27" 
path="/var/lib/kubelet/pods/6e424238-2fca-49c6-ab62-981792d4fd27/volumes" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.521747 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-pqkzp"] Feb 18 19:34:06 crc kubenswrapper[4754]: E0218 19:34:06.522828 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e424238-2fca-49c6-ab62-981792d4fd27" containerName="registry-server" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.522844 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e424238-2fca-49c6-ab62-981792d4fd27" containerName="registry-server" Feb 18 19:34:06 crc kubenswrapper[4754]: E0218 19:34:06.522874 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e424238-2fca-49c6-ab62-981792d4fd27" containerName="extract-content" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.522880 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e424238-2fca-49c6-ab62-981792d4fd27" containerName="extract-content" Feb 18 19:34:06 crc kubenswrapper[4754]: E0218 19:34:06.522896 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e424238-2fca-49c6-ab62-981792d4fd27" containerName="extract-utilities" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.522903 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e424238-2fca-49c6-ab62-981792d4fd27" containerName="extract-utilities" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.523055 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e424238-2fca-49c6-ab62-981792d4fd27" containerName="registry-server" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.523671 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pqkzp" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.526007 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-m2nkv" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.543602 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-pqkzp"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.566897 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-s85zg"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.574711 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-s85zg" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.590737 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-5k92h" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.597105 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2wzs\" (UniqueName: \"kubernetes.io/projected/72d3d891-2e01-4f49-bb96-45089a1fb702-kube-api-access-b2wzs\") pod \"designate-operator-controller-manager-6d8bf5c495-s85zg\" (UID: \"72d3d891-2e01-4f49-bb96-45089a1fb702\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-s85zg" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.597182 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6q87\" (UniqueName: \"kubernetes.io/projected/01f6e565-c160-40a7-8456-921ecb9980bf-kube-api-access-m6q87\") pod \"barbican-operator-controller-manager-868647ff47-pqkzp\" (UID: 
\"01f6e565-c160-40a7-8456-921ecb9980bf\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pqkzp" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.617597 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-btfr9"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.626489 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-btfr9" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.637878 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-jtttm" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.662311 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-btfr9"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.674315 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-s85zg"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.696028 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-kpj7s"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.697312 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kpj7s" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.699014 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nnp7\" (UniqueName: \"kubernetes.io/projected/298ec2c6-0d06-413a-a0c5-d381423fb11b-kube-api-access-9nnp7\") pod \"cinder-operator-controller-manager-5d946d989d-btfr9\" (UID: \"298ec2c6-0d06-413a-a0c5-d381423fb11b\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-btfr9" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.699118 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2wzs\" (UniqueName: \"kubernetes.io/projected/72d3d891-2e01-4f49-bb96-45089a1fb702-kube-api-access-b2wzs\") pod \"designate-operator-controller-manager-6d8bf5c495-s85zg\" (UID: \"72d3d891-2e01-4f49-bb96-45089a1fb702\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-s85zg" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.699166 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6q87\" (UniqueName: \"kubernetes.io/projected/01f6e565-c160-40a7-8456-921ecb9980bf-kube-api-access-m6q87\") pod \"barbican-operator-controller-manager-868647ff47-pqkzp\" (UID: \"01f6e565-c160-40a7-8456-921ecb9980bf\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pqkzp" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.702764 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-4wpvq" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.718152 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-kpj7s"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.735707 4754 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-hsrft"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.740657 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2wzs\" (UniqueName: \"kubernetes.io/projected/72d3d891-2e01-4f49-bb96-45089a1fb702-kube-api-access-b2wzs\") pod \"designate-operator-controller-manager-6d8bf5c495-s85zg\" (UID: \"72d3d891-2e01-4f49-bb96-45089a1fb702\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-s85zg" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.746307 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6q87\" (UniqueName: \"kubernetes.io/projected/01f6e565-c160-40a7-8456-921ecb9980bf-kube-api-access-m6q87\") pod \"barbican-operator-controller-manager-868647ff47-pqkzp\" (UID: \"01f6e565-c160-40a7-8456-921ecb9980bf\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pqkzp" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.763769 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-hsrft"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.764061 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hsrft" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.773400 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c56gw"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.774557 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c56gw" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.779219 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-bbk55"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.780431 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bbk55" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.782716 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-t6vmk" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.783628 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.793191 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c56gw"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.796990 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-6mlcg" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.797402 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-cqwhc" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.797705 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-4h67j"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.798861 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4h67j" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.801839 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-bbk55"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.802243 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nnp7\" (UniqueName: \"kubernetes.io/projected/298ec2c6-0d06-413a-a0c5-d381423fb11b-kube-api-access-9nnp7\") pod \"cinder-operator-controller-manager-5d946d989d-btfr9\" (UID: \"298ec2c6-0d06-413a-a0c5-d381423fb11b\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-btfr9" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.802295 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s65p\" (UniqueName: \"kubernetes.io/projected/f9f82a7f-36d1-4ff0-9053-a688d0691148-kube-api-access-7s65p\") pod \"horizon-operator-controller-manager-5b9b8895d5-c56gw\" (UID: \"f9f82a7f-36d1-4ff0-9053-a688d0691148\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c56gw" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.810536 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-q42v2" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.810696 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-4h67j"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.810728 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-jgbhw"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.812229 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jgbhw" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.817583 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-hkrnt" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.819566 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-gvj77"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.820346 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-gvj77" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.824222 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-jgbhw"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.829222 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-m46k9" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.849868 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nnp7\" (UniqueName: \"kubernetes.io/projected/298ec2c6-0d06-413a-a0c5-d381423fb11b-kube-api-access-9nnp7\") pod \"cinder-operator-controller-manager-5d946d989d-btfr9\" (UID: \"298ec2c6-0d06-413a-a0c5-d381423fb11b\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-btfr9" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.850611 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-gvj77"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.866424 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pqkzp" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.868198 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-54jtq"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.869426 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-54jtq" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.874678 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-9gfk2" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.884191 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-54jtq"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.904852 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bpjx\" (UniqueName: \"kubernetes.io/projected/b9027f53-0411-4ada-9f8d-31d952c5039e-kube-api-access-5bpjx\") pod \"glance-operator-controller-manager-77987464f4-kpj7s\" (UID: \"b9027f53-0411-4ada-9f8d-31d952c5039e\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-kpj7s" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.904907 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksjlx\" (UniqueName: \"kubernetes.io/projected/9e40ef7c-1af7-4981-a2a4-bcc98c82fc58-kube-api-access-ksjlx\") pod \"ironic-operator-controller-manager-554564d7fc-4h67j\" (UID: \"9e40ef7c-1af7-4981-a2a4-bcc98c82fc58\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4h67j" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.904988 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7s65p\" (UniqueName: \"kubernetes.io/projected/f9f82a7f-36d1-4ff0-9053-a688d0691148-kube-api-access-7s65p\") pod \"horizon-operator-controller-manager-5b9b8895d5-c56gw\" (UID: \"f9f82a7f-36d1-4ff0-9053-a688d0691148\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c56gw" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.906270 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxp2b\" (UniqueName: \"kubernetes.io/projected/ff462b12-ae60-4603-a1f2-07eb886af80f-kube-api-access-bxp2b\") pod \"heat-operator-controller-manager-69f49c598c-hsrft\" (UID: \"ff462b12-ae60-4603-a1f2-07eb886af80f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hsrft" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.906309 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkpqz\" (UniqueName: \"kubernetes.io/projected/3d21ba73-d3ac-4256-8fa8-451908a2e585-kube-api-access-vkpqz\") pod \"manila-operator-controller-manager-54f6768c69-gvj77\" (UID: \"3d21ba73-d3ac-4256-8fa8-451908a2e585\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-gvj77" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.906341 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kbt8\" (UniqueName: \"kubernetes.io/projected/9fb89866-aaf8-4478-ae67-7901a682f0e2-kube-api-access-8kbt8\") pod \"infra-operator-controller-manager-79d975b745-bbk55\" (UID: \"9fb89866-aaf8-4478-ae67-7901a682f0e2\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bbk55" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.906366 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert\") pod \"infra-operator-controller-manager-79d975b745-bbk55\" (UID: \"9fb89866-aaf8-4478-ae67-7901a682f0e2\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bbk55" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.906398 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42smc\" (UniqueName: \"kubernetes.io/projected/e870610a-7a60-4273-bc6f-5512fc6570c2-kube-api-access-42smc\") pod \"keystone-operator-controller-manager-b4d948c87-jgbhw\" (UID: \"e870610a-7a60-4273-bc6f-5512fc6570c2\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jgbhw" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.906819 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6xjq\" (UniqueName: \"kubernetes.io/projected/c908b6f1-79ef-4afb-8a99-9bee5b842a47-kube-api-access-q6xjq\") pod \"mariadb-operator-controller-manager-6994f66f48-54jtq\" (UID: \"c908b6f1-79ef-4afb-8a99-9bee5b842a47\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-54jtq" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.922399 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-7287z"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.923628 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7287z" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.923630 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-s85zg" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.930317 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-fm9ds"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.931645 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-fm9ds" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.941016 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-425hs" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.953570 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-btfr9" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.962133 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-ljm29" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.965211 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-7287z"] Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.983862 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s65p\" (UniqueName: \"kubernetes.io/projected/f9f82a7f-36d1-4ff0-9053-a688d0691148-kube-api-access-7s65p\") pod \"horizon-operator-controller-manager-5b9b8895d5-c56gw\" (UID: \"f9f82a7f-36d1-4ff0-9053-a688d0691148\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c56gw" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.994405 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-nkmr9"] Feb 18 
19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.995607 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-nkmr9" Feb 18 19:34:06 crc kubenswrapper[4754]: I0218 19:34:06.998588 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-fm9ds"] Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.018547 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxp2b\" (UniqueName: \"kubernetes.io/projected/ff462b12-ae60-4603-a1f2-07eb886af80f-kube-api-access-bxp2b\") pod \"heat-operator-controller-manager-69f49c598c-hsrft\" (UID: \"ff462b12-ae60-4603-a1f2-07eb886af80f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hsrft" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.018593 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-gclfd" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.018612 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkpqz\" (UniqueName: \"kubernetes.io/projected/3d21ba73-d3ac-4256-8fa8-451908a2e585-kube-api-access-vkpqz\") pod \"manila-operator-controller-manager-54f6768c69-gvj77\" (UID: \"3d21ba73-d3ac-4256-8fa8-451908a2e585\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-gvj77" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.018658 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kbt8\" (UniqueName: \"kubernetes.io/projected/9fb89866-aaf8-4478-ae67-7901a682f0e2-kube-api-access-8kbt8\") pod \"infra-operator-controller-manager-79d975b745-bbk55\" (UID: \"9fb89866-aaf8-4478-ae67-7901a682f0e2\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bbk55" Feb 18 
19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.018690 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert\") pod \"infra-operator-controller-manager-79d975b745-bbk55\" (UID: \"9fb89866-aaf8-4478-ae67-7901a682f0e2\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bbk55" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.018737 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qfs8\" (UniqueName: \"kubernetes.io/projected/01e7cd39-1236-497c-a0f5-916631fde3ee-kube-api-access-2qfs8\") pod \"neutron-operator-controller-manager-64ddbf8bb-fm9ds\" (UID: \"01e7cd39-1236-497c-a0f5-916631fde3ee\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-fm9ds" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.018765 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42smc\" (UniqueName: \"kubernetes.io/projected/e870610a-7a60-4273-bc6f-5512fc6570c2-kube-api-access-42smc\") pod \"keystone-operator-controller-manager-b4d948c87-jgbhw\" (UID: \"e870610a-7a60-4273-bc6f-5512fc6570c2\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jgbhw" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.018853 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6xjq\" (UniqueName: \"kubernetes.io/projected/c908b6f1-79ef-4afb-8a99-9bee5b842a47-kube-api-access-q6xjq\") pod \"mariadb-operator-controller-manager-6994f66f48-54jtq\" (UID: \"c908b6f1-79ef-4afb-8a99-9bee5b842a47\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-54jtq" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.018877 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-54qsc\" (UniqueName: \"kubernetes.io/projected/8dd7404b-411a-47f3-92f1-a94b2cfade39-kube-api-access-54qsc\") pod \"octavia-operator-controller-manager-69f8888797-nkmr9\" (UID: \"8dd7404b-411a-47f3-92f1-a94b2cfade39\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-nkmr9" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.018921 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bpjx\" (UniqueName: \"kubernetes.io/projected/b9027f53-0411-4ada-9f8d-31d952c5039e-kube-api-access-5bpjx\") pod \"glance-operator-controller-manager-77987464f4-kpj7s\" (UID: \"b9027f53-0411-4ada-9f8d-31d952c5039e\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-kpj7s" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.018944 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksjlx\" (UniqueName: \"kubernetes.io/projected/9e40ef7c-1af7-4981-a2a4-bcc98c82fc58-kube-api-access-ksjlx\") pod \"ironic-operator-controller-manager-554564d7fc-4h67j\" (UID: \"9e40ef7c-1af7-4981-a2a4-bcc98c82fc58\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4h67j" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.018995 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm7tx\" (UniqueName: \"kubernetes.io/projected/94e4720b-a822-4e0b-adb2-3dbcab23d98c-kube-api-access-fm7tx\") pod \"nova-operator-controller-manager-567668f5cf-7287z\" (UID: \"94e4720b-a822-4e0b-adb2-3dbcab23d98c\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7287z" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.019805 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-nkmr9"] Feb 18 19:34:07 crc kubenswrapper[4754]: E0218 19:34:07.021197 4754 secret.go:188] Couldn't 
get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:34:07 crc kubenswrapper[4754]: E0218 19:34:07.021269 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert podName:9fb89866-aaf8-4478-ae67-7901a682f0e2 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:07.521246663 +0000 UTC m=+949.971659459 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert") pod "infra-operator-controller-manager-79d975b745-bbk55" (UID: "9fb89866-aaf8-4478-ae67-7901a682f0e2") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.030450 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q"] Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.031944 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.052204 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-prm45"] Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.053310 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-prm45" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.133796 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-prm45"] Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.139237 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qfs8\" (UniqueName: \"kubernetes.io/projected/01e7cd39-1236-497c-a0f5-916631fde3ee-kube-api-access-2qfs8\") pod \"neutron-operator-controller-manager-64ddbf8bb-fm9ds\" (UID: \"01e7cd39-1236-497c-a0f5-916631fde3ee\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-fm9ds" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.139340 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82798\" (UniqueName: \"kubernetes.io/projected/6e2e06c8-6b3f-48d8-9b24-acb4fdbfb2e2-kube-api-access-82798\") pod \"ovn-operator-controller-manager-d44cf6b75-prm45\" (UID: \"6e2e06c8-6b3f-48d8-9b24-acb4fdbfb2e2\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-prm45" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.139366 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54qsc\" (UniqueName: \"kubernetes.io/projected/8dd7404b-411a-47f3-92f1-a94b2cfade39-kube-api-access-54qsc\") pod \"octavia-operator-controller-manager-69f8888797-nkmr9\" (UID: \"8dd7404b-411a-47f3-92f1-a94b2cfade39\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-nkmr9" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.139428 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjd6v\" (UniqueName: \"kubernetes.io/projected/400479c2-96d2-4645-abf3-a03726e86cfb-kube-api-access-wjd6v\") pod 
\"openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q\" (UID: \"400479c2-96d2-4645-abf3-a03726e86cfb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.139469 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm7tx\" (UniqueName: \"kubernetes.io/projected/94e4720b-a822-4e0b-adb2-3dbcab23d98c-kube-api-access-fm7tx\") pod \"nova-operator-controller-manager-567668f5cf-7287z\" (UID: \"94e4720b-a822-4e0b-adb2-3dbcab23d98c\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7287z" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.139491 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q\" (UID: \"400479c2-96d2-4645-abf3-a03726e86cfb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.141582 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pj7wx" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.141797 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-62zmh" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.142330 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c56gw" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.148755 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.160246 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxp2b\" (UniqueName: \"kubernetes.io/projected/ff462b12-ae60-4603-a1f2-07eb886af80f-kube-api-access-bxp2b\") pod \"heat-operator-controller-manager-69f49c598c-hsrft\" (UID: \"ff462b12-ae60-4603-a1f2-07eb886af80f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hsrft" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.165445 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bpjx\" (UniqueName: \"kubernetes.io/projected/b9027f53-0411-4ada-9f8d-31d952c5039e-kube-api-access-5bpjx\") pod \"glance-operator-controller-manager-77987464f4-kpj7s\" (UID: \"b9027f53-0411-4ada-9f8d-31d952c5039e\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-kpj7s" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.248029 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kbt8\" (UniqueName: \"kubernetes.io/projected/9fb89866-aaf8-4478-ae67-7901a682f0e2-kube-api-access-8kbt8\") pod \"infra-operator-controller-manager-79d975b745-bbk55\" (UID: \"9fb89866-aaf8-4478-ae67-7901a682f0e2\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bbk55" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.249720 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjd6v\" (UniqueName: \"kubernetes.io/projected/400479c2-96d2-4645-abf3-a03726e86cfb-kube-api-access-wjd6v\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q\" 
(UID: \"400479c2-96d2-4645-abf3-a03726e86cfb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.249856 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q\" (UID: \"400479c2-96d2-4645-abf3-a03726e86cfb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.250002 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82798\" (UniqueName: \"kubernetes.io/projected/6e2e06c8-6b3f-48d8-9b24-acb4fdbfb2e2-kube-api-access-82798\") pod \"ovn-operator-controller-manager-d44cf6b75-prm45\" (UID: \"6e2e06c8-6b3f-48d8-9b24-acb4fdbfb2e2\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-prm45" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.252974 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkpqz\" (UniqueName: \"kubernetes.io/projected/3d21ba73-d3ac-4256-8fa8-451908a2e585-kube-api-access-vkpqz\") pod \"manila-operator-controller-manager-54f6768c69-gvj77\" (UID: \"3d21ba73-d3ac-4256-8fa8-451908a2e585\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-gvj77" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.253531 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54qsc\" (UniqueName: \"kubernetes.io/projected/8dd7404b-411a-47f3-92f1-a94b2cfade39-kube-api-access-54qsc\") pod \"octavia-operator-controller-manager-69f8888797-nkmr9\" (UID: \"8dd7404b-411a-47f3-92f1-a94b2cfade39\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-nkmr9" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 
19:34:07.253815 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksjlx\" (UniqueName: \"kubernetes.io/projected/9e40ef7c-1af7-4981-a2a4-bcc98c82fc58-kube-api-access-ksjlx\") pod \"ironic-operator-controller-manager-554564d7fc-4h67j\" (UID: \"9e40ef7c-1af7-4981-a2a4-bcc98c82fc58\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4h67j" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.254362 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6xjq\" (UniqueName: \"kubernetes.io/projected/c908b6f1-79ef-4afb-8a99-9bee5b842a47-kube-api-access-q6xjq\") pod \"mariadb-operator-controller-manager-6994f66f48-54jtq\" (UID: \"c908b6f1-79ef-4afb-8a99-9bee5b842a47\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-54jtq" Feb 18 19:34:07 crc kubenswrapper[4754]: E0218 19:34:07.262093 4754 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:34:07 crc kubenswrapper[4754]: E0218 19:34:07.262215 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert podName:400479c2-96d2-4645-abf3-a03726e86cfb nodeName:}" failed. No retries permitted until 2026-02-18 19:34:07.762184134 +0000 UTC m=+950.212596930 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" (UID: "400479c2-96d2-4645-abf3-a03726e86cfb") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.269204 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42smc\" (UniqueName: \"kubernetes.io/projected/e870610a-7a60-4273-bc6f-5512fc6570c2-kube-api-access-42smc\") pod \"keystone-operator-controller-manager-b4d948c87-jgbhw\" (UID: \"e870610a-7a60-4273-bc6f-5512fc6570c2\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jgbhw" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.279841 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm7tx\" (UniqueName: \"kubernetes.io/projected/94e4720b-a822-4e0b-adb2-3dbcab23d98c-kube-api-access-fm7tx\") pod \"nova-operator-controller-manager-567668f5cf-7287z\" (UID: \"94e4720b-a822-4e0b-adb2-3dbcab23d98c\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7287z" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.282502 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82798\" (UniqueName: \"kubernetes.io/projected/6e2e06c8-6b3f-48d8-9b24-acb4fdbfb2e2-kube-api-access-82798\") pod \"ovn-operator-controller-manager-d44cf6b75-prm45\" (UID: \"6e2e06c8-6b3f-48d8-9b24-acb4fdbfb2e2\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-prm45" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.285311 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q"] Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.307985 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-2qfs8\" (UniqueName: \"kubernetes.io/projected/01e7cd39-1236-497c-a0f5-916631fde3ee-kube-api-access-2qfs8\") pod \"neutron-operator-controller-manager-64ddbf8bb-fm9ds\" (UID: \"01e7cd39-1236-497c-a0f5-916631fde3ee\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-fm9ds" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.317485 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-prm45" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.318857 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kpj7s" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.336814 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-gvj77" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.346776 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-54jtq" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.357788 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjd6v\" (UniqueName: \"kubernetes.io/projected/400479c2-96d2-4645-abf3-a03726e86cfb-kube-api-access-wjd6v\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q\" (UID: \"400479c2-96d2-4645-abf3-a03726e86cfb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.372832 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-q92wm"] Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.374254 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q92wm" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.407038 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-lrfbd" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.407451 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7287z" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.413650 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g7dh6"] Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.414684 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g7dh6" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.420437 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hsrft" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.421345 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-p9lgw" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.452196 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-wn272"] Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.453246 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wn272" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.459855 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-k9xwl" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.464756 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-fm9ds" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.468098 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-wn272"] Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.476388 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-284vm\" (UniqueName: \"kubernetes.io/projected/a401ec6a-978c-42d0-9cba-af38aebd03d2-kube-api-access-284vm\") pod \"telemetry-operator-controller-manager-7f45b4ff68-g7dh6\" (UID: \"a401ec6a-978c-42d0-9cba-af38aebd03d2\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g7dh6" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.476487 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpjnk\" (UniqueName: \"kubernetes.io/projected/9fdc5e67-9299-4d07-b2b5-b8c766e68469-kube-api-access-zpjnk\") pod \"placement-operator-controller-manager-8497b45c89-wn272\" (UID: \"9fdc5e67-9299-4d07-b2b5-b8c766e68469\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wn272" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.476520 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq7pp\" (UniqueName: \"kubernetes.io/projected/af3235d8-efd9-4edb-bd48-5f7ec1e41524-kube-api-access-lq7pp\") pod 
\"swift-operator-controller-manager-68f46476f-q92wm\" (UID: \"af3235d8-efd9-4edb-bd48-5f7ec1e41524\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-q92wm" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.477573 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-nkmr9" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.477950 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g7dh6"] Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.488760 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-q92wm"] Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.493077 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4h67j" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.510743 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-nlwpx"] Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.511741 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-nlwpx" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.526223 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-nlwpx"] Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.533512 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-6hxzg" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.548553 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jgbhw" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.579480 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-284vm\" (UniqueName: \"kubernetes.io/projected/a401ec6a-978c-42d0-9cba-af38aebd03d2-kube-api-access-284vm\") pod \"telemetry-operator-controller-manager-7f45b4ff68-g7dh6\" (UID: \"a401ec6a-978c-42d0-9cba-af38aebd03d2\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g7dh6" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.580091 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert\") pod \"infra-operator-controller-manager-79d975b745-bbk55\" (UID: \"9fb89866-aaf8-4478-ae67-7901a682f0e2\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bbk55" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.580162 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpjnk\" (UniqueName: \"kubernetes.io/projected/9fdc5e67-9299-4d07-b2b5-b8c766e68469-kube-api-access-zpjnk\") pod \"placement-operator-controller-manager-8497b45c89-wn272\" (UID: \"9fdc5e67-9299-4d07-b2b5-b8c766e68469\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wn272" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.580200 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq7pp\" (UniqueName: \"kubernetes.io/projected/af3235d8-efd9-4edb-bd48-5f7ec1e41524-kube-api-access-lq7pp\") pod \"swift-operator-controller-manager-68f46476f-q92wm\" (UID: \"af3235d8-efd9-4edb-bd48-5f7ec1e41524\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-q92wm" Feb 18 19:34:07 crc kubenswrapper[4754]: E0218 19:34:07.591276 4754 secret.go:188] 
Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:34:07 crc kubenswrapper[4754]: E0218 19:34:07.591381 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert podName:9fb89866-aaf8-4478-ae67-7901a682f0e2 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:08.591355432 +0000 UTC m=+951.041768218 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert") pod "infra-operator-controller-manager-79d975b745-bbk55" (UID: "9fb89866-aaf8-4478-ae67-7901a682f0e2") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.601454 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-747b8fb99f-kxcrx"] Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.602946 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-747b8fb99f-kxcrx" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.610596 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-fvn74" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.623587 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-747b8fb99f-kxcrx"] Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.632620 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpjnk\" (UniqueName: \"kubernetes.io/projected/9fdc5e67-9299-4d07-b2b5-b8c766e68469-kube-api-access-zpjnk\") pod \"placement-operator-controller-manager-8497b45c89-wn272\" (UID: \"9fdc5e67-9299-4d07-b2b5-b8c766e68469\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wn272" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.652068 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq7pp\" (UniqueName: \"kubernetes.io/projected/af3235d8-efd9-4edb-bd48-5f7ec1e41524-kube-api-access-lq7pp\") pod \"swift-operator-controller-manager-68f46476f-q92wm\" (UID: \"af3235d8-efd9-4edb-bd48-5f7ec1e41524\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-q92wm" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.652990 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-284vm\" (UniqueName: \"kubernetes.io/projected/a401ec6a-978c-42d0-9cba-af38aebd03d2-kube-api-access-284vm\") pod \"telemetry-operator-controller-manager-7f45b4ff68-g7dh6\" (UID: \"a401ec6a-978c-42d0-9cba-af38aebd03d2\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g7dh6" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.685080 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n962j\" (UniqueName: \"kubernetes.io/projected/460668cc-09d6-4ea3-a100-5fe7cd58e9f5-kube-api-access-n962j\") pod \"test-operator-controller-manager-7866795846-nlwpx\" (UID: \"460668cc-09d6-4ea3-a100-5fe7cd58e9f5\") " pod="openstack-operators/test-operator-controller-manager-7866795846-nlwpx" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.723370 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q92wm" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.752005 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q"] Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.753187 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.756112 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g7dh6" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.757261 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.757454 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-94t2p" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.757572 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.787960 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q\" (UID: \"400479c2-96d2-4645-abf3-a03726e86cfb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.788024 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n962j\" (UniqueName: \"kubernetes.io/projected/460668cc-09d6-4ea3-a100-5fe7cd58e9f5-kube-api-access-n962j\") pod \"test-operator-controller-manager-7866795846-nlwpx\" (UID: \"460668cc-09d6-4ea3-a100-5fe7cd58e9f5\") " pod="openstack-operators/test-operator-controller-manager-7866795846-nlwpx" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.788084 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czx8w\" (UniqueName: \"kubernetes.io/projected/70e6e7fe-dfdf-4bcf-815a-7e9c79e77965-kube-api-access-czx8w\") pod \"watcher-operator-controller-manager-747b8fb99f-kxcrx\" (UID: \"70e6e7fe-dfdf-4bcf-815a-7e9c79e77965\") " 
pod="openstack-operators/watcher-operator-controller-manager-747b8fb99f-kxcrx" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.788321 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wn272" Feb 18 19:34:07 crc kubenswrapper[4754]: E0218 19:34:07.788880 4754 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:34:07 crc kubenswrapper[4754]: E0218 19:34:07.788939 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert podName:400479c2-96d2-4645-abf3-a03726e86cfb nodeName:}" failed. No retries permitted until 2026-02-18 19:34:08.788920123 +0000 UTC m=+951.239332919 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" (UID: "400479c2-96d2-4645-abf3-a03726e86cfb") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.789315 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q"] Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.827965 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n962j\" (UniqueName: \"kubernetes.io/projected/460668cc-09d6-4ea3-a100-5fe7cd58e9f5-kube-api-access-n962j\") pod \"test-operator-controller-manager-7866795846-nlwpx\" (UID: \"460668cc-09d6-4ea3-a100-5fe7cd58e9f5\") " pod="openstack-operators/test-operator-controller-manager-7866795846-nlwpx" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.890402 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-webhook-certs\") pod \"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.890524 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czx8w\" (UniqueName: \"kubernetes.io/projected/70e6e7fe-dfdf-4bcf-815a-7e9c79e77965-kube-api-access-czx8w\") pod \"watcher-operator-controller-manager-747b8fb99f-kxcrx\" (UID: \"70e6e7fe-dfdf-4bcf-815a-7e9c79e77965\") " pod="openstack-operators/watcher-operator-controller-manager-747b8fb99f-kxcrx" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.890548 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs\") pod \"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.890612 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tcdb\" (UniqueName: \"kubernetes.io/projected/7863f879-0b49-4521-9f57-8d90c41dc154-kube-api-access-4tcdb\") pod \"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.897778 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hmjgh"] Feb 18 19:34:07 crc 
kubenswrapper[4754]: I0218 19:34:07.900028 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hmjgh" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.910696 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hmjgh"] Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.916934 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-rpmxg" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.923467 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czx8w\" (UniqueName: \"kubernetes.io/projected/70e6e7fe-dfdf-4bcf-815a-7e9c79e77965-kube-api-access-czx8w\") pod \"watcher-operator-controller-manager-747b8fb99f-kxcrx\" (UID: \"70e6e7fe-dfdf-4bcf-815a-7e9c79e77965\") " pod="openstack-operators/watcher-operator-controller-manager-747b8fb99f-kxcrx" Feb 18 19:34:07 crc kubenswrapper[4754]: I0218 19:34:07.971626 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-nlwpx" Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:07.998729 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tcdb\" (UniqueName: \"kubernetes.io/projected/7863f879-0b49-4521-9f57-8d90c41dc154-kube-api-access-4tcdb\") pod \"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:07.999094 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-webhook-certs\") pod \"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:07.999216 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs\") pod \"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:08 crc kubenswrapper[4754]: E0218 19:34:07.999809 4754 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:34:08 crc kubenswrapper[4754]: E0218 19:34:07.999855 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs podName:7863f879-0b49-4521-9f57-8d90c41dc154 nodeName:}" failed. 
No retries permitted until 2026-02-18 19:34:08.499839398 +0000 UTC m=+950.950252194 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs") pod "openstack-operator-controller-manager-6b6dff6c94-jfn8q" (UID: "7863f879-0b49-4521-9f57-8d90c41dc154") : secret "metrics-server-cert" not found Feb 18 19:34:08 crc kubenswrapper[4754]: E0218 19:34:07.999890 4754 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:34:08 crc kubenswrapper[4754]: E0218 19:34:07.999910 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-webhook-certs podName:7863f879-0b49-4521-9f57-8d90c41dc154 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:08.499902 +0000 UTC m=+950.950314796 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-webhook-certs") pod "openstack-operator-controller-manager-6b6dff6c94-jfn8q" (UID: "7863f879-0b49-4521-9f57-8d90c41dc154") : secret "webhook-server-cert" not found Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:08.022386 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-747b8fb99f-kxcrx" Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:08.025020 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tcdb\" (UniqueName: \"kubernetes.io/projected/7863f879-0b49-4521-9f57-8d90c41dc154-kube-api-access-4tcdb\") pod \"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:08.097328 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:08.097395 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:08.097451 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:08.098304 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b71273a5a5eb671bd4925b19c78d15799283dc68e22f711f6ec374c23ac8c87"} pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 
19:34:08.098359 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" containerID="cri-o://0b71273a5a5eb671bd4925b19c78d15799283dc68e22f711f6ec374c23ac8c87" gracePeriod=600 Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:08.102671 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rp2v\" (UniqueName: \"kubernetes.io/projected/98ea0633-bf9d-410b-bfa5-e71f322755f3-kube-api-access-6rp2v\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hmjgh\" (UID: \"98ea0633-bf9d-410b-bfa5-e71f322755f3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hmjgh" Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:08.203825 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rp2v\" (UniqueName: \"kubernetes.io/projected/98ea0633-bf9d-410b-bfa5-e71f322755f3-kube-api-access-6rp2v\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hmjgh\" (UID: \"98ea0633-bf9d-410b-bfa5-e71f322755f3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hmjgh" Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:08.270817 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rp2v\" (UniqueName: \"kubernetes.io/projected/98ea0633-bf9d-410b-bfa5-e71f322755f3-kube-api-access-6rp2v\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hmjgh\" (UID: \"98ea0633-bf9d-410b-bfa5-e71f322755f3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hmjgh" Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:08.337017 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hmjgh" Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:08.428485 4754 generic.go:334] "Generic (PLEG): container finished" podID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerID="0b71273a5a5eb671bd4925b19c78d15799283dc68e22f711f6ec374c23ac8c87" exitCode=0 Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:08.428584 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerDied","Data":"0b71273a5a5eb671bd4925b19c78d15799283dc68e22f711f6ec374c23ac8c87"} Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:08.429068 4754 scope.go:117] "RemoveContainer" containerID="e80755a090c368aaa1f52b9e1d9b61931048fa366f94c61a6b1fb41f5ef0c6f5" Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:08.509873 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-webhook-certs\") pod \"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:08 crc kubenswrapper[4754]: E0218 19:34:08.511166 4754 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:34:08 crc kubenswrapper[4754]: E0218 19:34:08.511252 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-webhook-certs podName:7863f879-0b49-4521-9f57-8d90c41dc154 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:09.511220052 +0000 UTC m=+951.961632908 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-webhook-certs") pod "openstack-operator-controller-manager-6b6dff6c94-jfn8q" (UID: "7863f879-0b49-4521-9f57-8d90c41dc154") : secret "webhook-server-cert" not found Feb 18 19:34:08 crc kubenswrapper[4754]: E0218 19:34:08.514644 4754 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:34:08 crc kubenswrapper[4754]: E0218 19:34:08.514748 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs podName:7863f879-0b49-4521-9f57-8d90c41dc154 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:09.51472133 +0000 UTC m=+951.965134126 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs") pod "openstack-operator-controller-manager-6b6dff6c94-jfn8q" (UID: "7863f879-0b49-4521-9f57-8d90c41dc154") : secret "metrics-server-cert" not found Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:08.516938 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs\") pod \"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:08.618802 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert\") pod \"infra-operator-controller-manager-79d975b745-bbk55\" (UID: \"9fb89866-aaf8-4478-ae67-7901a682f0e2\") " 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-bbk55" Feb 18 19:34:08 crc kubenswrapper[4754]: E0218 19:34:08.619134 4754 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:34:08 crc kubenswrapper[4754]: E0218 19:34:08.619238 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert podName:9fb89866-aaf8-4478-ae67-7901a682f0e2 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:10.619206147 +0000 UTC m=+953.069618943 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert") pod "infra-operator-controller-manager-79d975b745-bbk55" (UID: "9fb89866-aaf8-4478-ae67-7901a682f0e2") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:08.772185 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-pqkzp"] Feb 18 19:34:08 crc kubenswrapper[4754]: I0218 19:34:08.821312 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q\" (UID: \"400479c2-96d2-4645-abf3-a03726e86cfb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" Feb 18 19:34:08 crc kubenswrapper[4754]: E0218 19:34:08.821546 4754 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:34:08 crc kubenswrapper[4754]: E0218 19:34:08.821645 4754 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert podName:400479c2-96d2-4645-abf3-a03726e86cfb nodeName:}" failed. No retries permitted until 2026-02-18 19:34:10.821623688 +0000 UTC m=+953.272036484 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" (UID: "400479c2-96d2-4645-abf3-a03726e86cfb") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.162824 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c56gw"] Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.197953 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-gvj77"] Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.215897 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-btfr9"] Feb 18 19:34:09 crc kubenswrapper[4754]: W0218 19:34:09.235891 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72d3d891_2e01_4f49_bb96_45089a1fb702.slice/crio-febb733f5e0eec1c8dcadc61aae09f0c37ebe8dfa9d72c8d66576c6c8200a10f WatchSource:0}: Error finding container febb733f5e0eec1c8dcadc61aae09f0c37ebe8dfa9d72c8d66576c6c8200a10f: Status 404 returned error can't find the container with id febb733f5e0eec1c8dcadc61aae09f0c37ebe8dfa9d72c8d66576c6c8200a10f Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.235993 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-s85zg"] Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.243507 4754 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-nkmr9"] Feb 18 19:34:09 crc kubenswrapper[4754]: W0218 19:34:09.255771 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode870610a_7a60_4273_bc6f_5512fc6570c2.slice/crio-ddcdfcd0f9fd404d756955b2852512296c418ccd024dccbadbb2b84861a077f7 WatchSource:0}: Error finding container ddcdfcd0f9fd404d756955b2852512296c418ccd024dccbadbb2b84861a077f7: Status 404 returned error can't find the container with id ddcdfcd0f9fd404d756955b2852512296c418ccd024dccbadbb2b84861a077f7 Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.268086 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-54jtq"] Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.277339 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-jgbhw"] Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.283519 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-kpj7s"] Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.289470 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-hsrft"] Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.441896 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-nkmr9" event={"ID":"8dd7404b-411a-47f3-92f1-a94b2cfade39","Type":"ContainerStarted","Data":"9778143a1906069cc87aad66bc56ffa40d157a49ad384ad646147aa1c9070059"} Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.443838 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" 
event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"c0026e2ecf3c88a72909f5c7c0de86e2f4abd80ac8afc7c18f8c5bf2f5f9229e"} Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.447499 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pqkzp" event={"ID":"01f6e565-c160-40a7-8456-921ecb9980bf","Type":"ContainerStarted","Data":"235fab425d5cb20abf8dc24ef70c325abb2e5a2d206b2f0e81a14e5037c439a8"} Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.467951 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-gvj77" event={"ID":"3d21ba73-d3ac-4256-8fa8-451908a2e585","Type":"ContainerStarted","Data":"d46b412bef6720afb408815acd32cc84eeec2b66689bca11351af00d22f3a9f9"} Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.470006 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hsrft" event={"ID":"ff462b12-ae60-4603-a1f2-07eb886af80f","Type":"ContainerStarted","Data":"1719b594a52a4bb887e43c0f125e044b1a61bead98cf6ad8762f360757d2d635"} Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.472942 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kpj7s" event={"ID":"b9027f53-0411-4ada-9f8d-31d952c5039e","Type":"ContainerStarted","Data":"ac478d0fc4e1b8c6565040d5e8893d46995e6f5d0a321d9aad13586f4df963d1"} Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.475303 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-s85zg" event={"ID":"72d3d891-2e01-4f49-bb96-45089a1fb702","Type":"ContainerStarted","Data":"febb733f5e0eec1c8dcadc61aae09f0c37ebe8dfa9d72c8d66576c6c8200a10f"} Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.476398 4754 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-btfr9" event={"ID":"298ec2c6-0d06-413a-a0c5-d381423fb11b","Type":"ContainerStarted","Data":"fb46fe9fd196159351b2ef90ce0206cf39da7531bb7289bf10d00ffc3e37abd2"} Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.478388 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-54jtq" event={"ID":"c908b6f1-79ef-4afb-8a99-9bee5b842a47","Type":"ContainerStarted","Data":"f642d17287fdcbafa0eedeb96027185dd97b14f9559ac0330868a80b2b40b20b"} Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.490460 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jgbhw" event={"ID":"e870610a-7a60-4273-bc6f-5512fc6570c2","Type":"ContainerStarted","Data":"ddcdfcd0f9fd404d756955b2852512296c418ccd024dccbadbb2b84861a077f7"} Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.492559 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c56gw" event={"ID":"f9f82a7f-36d1-4ff0-9053-a688d0691148","Type":"ContainerStarted","Data":"2a55c5d51cf320e5e44ae100c4131fc3dbd4a15d5bc79bc95dac2a98fd59db84"} Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.508645 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g7dh6"] Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.528068 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-prm45"] Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.530882 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-747b8fb99f-kxcrx"] Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.533220 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs\") pod \"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.533291 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-webhook-certs\") pod \"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:09 crc kubenswrapper[4754]: E0218 19:34:09.534687 4754 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:34:09 crc kubenswrapper[4754]: E0218 19:34:09.534745 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs podName:7863f879-0b49-4521-9f57-8d90c41dc154 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:11.534727824 +0000 UTC m=+953.985140620 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs") pod "openstack-operator-controller-manager-6b6dff6c94-jfn8q" (UID: "7863f879-0b49-4521-9f57-8d90c41dc154") : secret "metrics-server-cert" not found Feb 18 19:34:09 crc kubenswrapper[4754]: E0218 19:34:09.535242 4754 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:34:09 crc kubenswrapper[4754]: E0218 19:34:09.535286 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-webhook-certs podName:7863f879-0b49-4521-9f57-8d90c41dc154 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:11.53527567 +0000 UTC m=+953.985688466 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-webhook-certs") pod "openstack-operator-controller-manager-6b6dff6c94-jfn8q" (UID: "7863f879-0b49-4521-9f57-8d90c41dc154") : secret "webhook-server-cert" not found Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.556909 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-7287z"] Feb 18 19:34:09 crc kubenswrapper[4754]: E0218 19:34:09.560931 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.98:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-czx8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-747b8fb99f-kxcrx_openstack-operators(70e6e7fe-dfdf-4bcf-815a-7e9c79e77965): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:34:09 crc kubenswrapper[4754]: E0218 19:34:09.563185 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-747b8fb99f-kxcrx" podUID="70e6e7fe-dfdf-4bcf-815a-7e9c79e77965" Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.569688 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-4h67j"] Feb 18 19:34:09 crc kubenswrapper[4754]: W0218 19:34:09.579356 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e40ef7c_1af7_4981_a2a4_bcc98c82fc58.slice/crio-1827e7ccbf67a5e33022d1f17acdbb1a3368be71bdfb0ac39a702389e659ca48 WatchSource:0}: Error finding container 1827e7ccbf67a5e33022d1f17acdbb1a3368be71bdfb0ac39a702389e659ca48: Status 404 returned error can't find the container with id 
1827e7ccbf67a5e33022d1f17acdbb1a3368be71bdfb0ac39a702389e659ca48 Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.584906 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-fm9ds"] Feb 18 19:34:09 crc kubenswrapper[4754]: E0218 19:34:09.587382 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fm7tx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-7287z_openstack-operators(94e4720b-a822-4e0b-adb2-3dbcab23d98c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:34:09 crc kubenswrapper[4754]: E0218 19:34:09.588922 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7287z" podUID="94e4720b-a822-4e0b-adb2-3dbcab23d98c" Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.589509 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hmjgh"] Feb 18 19:34:09 crc kubenswrapper[4754]: E0218 19:34:09.590084 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2qfs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-fm9ds_openstack-operators(01e7cd39-1236-497c-a0f5-916631fde3ee): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:34:09 crc kubenswrapper[4754]: E0218 19:34:09.591371 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-fm9ds" podUID="01e7cd39-1236-497c-a0f5-916631fde3ee" Feb 18 19:34:09 crc kubenswrapper[4754]: E0218 19:34:09.593959 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6rp2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-hmjgh_openstack-operators(98ea0633-bf9d-410b-bfa5-e71f322755f3): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:34:09 crc kubenswrapper[4754]: E0218 19:34:09.595339 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hmjgh" podUID="98ea0633-bf9d-410b-bfa5-e71f322755f3" Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.597103 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-q92wm"] Feb 18 19:34:09 crc kubenswrapper[4754]: E0218 19:34:09.607490 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lq7pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-q92wm_openstack-operators(af3235d8-efd9-4edb-bd48-5f7ec1e41524): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:34:09 crc kubenswrapper[4754]: W0218 19:34:09.608199 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod460668cc_09d6_4ea3_a100_5fe7cd58e9f5.slice/crio-d235a3fb9cbd23b7d51f9b96eb136820926019e3ec6bc9441f84cfd3594b2f19 WatchSource:0}: Error finding container d235a3fb9cbd23b7d51f9b96eb136820926019e3ec6bc9441f84cfd3594b2f19: Status 404 returned error can't find the container with id d235a3fb9cbd23b7d51f9b96eb136820926019e3ec6bc9441f84cfd3594b2f19 Feb 18 19:34:09 crc kubenswrapper[4754]: E0218 19:34:09.608711 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" 
with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q92wm" podUID="af3235d8-efd9-4edb-bd48-5f7ec1e41524" Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.610620 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-nlwpx"] Feb 18 19:34:09 crc kubenswrapper[4754]: W0218 19:34:09.613448 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fdc5e67_9299_4d07_b2b5_b8c766e68469.slice/crio-160a82afcf153eb3669a1be28b4b327cd9eeff4844ee93ff698b353e1887b477 WatchSource:0}: Error finding container 160a82afcf153eb3669a1be28b4b327cd9eeff4844ee93ff698b353e1887b477: Status 404 returned error can't find the container with id 160a82afcf153eb3669a1be28b4b327cd9eeff4844ee93ff698b353e1887b477 Feb 18 19:34:09 crc kubenswrapper[4754]: E0218 19:34:09.614522 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n962j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-nlwpx_openstack-operators(460668cc-09d6-4ea3-a100-5fe7cd58e9f5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:34:09 crc kubenswrapper[4754]: E0218 19:34:09.615857 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-nlwpx" podUID="460668cc-09d6-4ea3-a100-5fe7cd58e9f5" Feb 18 19:34:09 crc 
kubenswrapper[4754]: E0218 19:34:09.616659 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zpjnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-wn272_openstack-operators(9fdc5e67-9299-4d07-b2b5-b8c766e68469): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:34:09 crc kubenswrapper[4754]: E0218 19:34:09.618091 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wn272" podUID="9fdc5e67-9299-4d07-b2b5-b8c766e68469" Feb 18 19:34:09 crc kubenswrapper[4754]: I0218 19:34:09.622660 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-wn272"] Feb 18 19:34:10 crc kubenswrapper[4754]: I0218 19:34:10.530962 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q92wm" event={"ID":"af3235d8-efd9-4edb-bd48-5f7ec1e41524","Type":"ContainerStarted","Data":"b911414bd6f1562b6f4a2ae344ac92568422fd0fdc00a9830ea9f8836ff04480"} Feb 18 19:34:10 crc kubenswrapper[4754]: E0218 19:34:10.534707 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q92wm" podUID="af3235d8-efd9-4edb-bd48-5f7ec1e41524" Feb 18 19:34:10 crc kubenswrapper[4754]: I0218 19:34:10.536206 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-747b8fb99f-kxcrx" event={"ID":"70e6e7fe-dfdf-4bcf-815a-7e9c79e77965","Type":"ContainerStarted","Data":"725effc408648fcb1f8e39d76945d7b3424f8fee2ad44ebac95d5698354abd20"} Feb 18 19:34:10 crc kubenswrapper[4754]: E0218 19:34:10.538910 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.98:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-747b8fb99f-kxcrx" podUID="70e6e7fe-dfdf-4bcf-815a-7e9c79e77965" Feb 18 19:34:10 crc kubenswrapper[4754]: I0218 19:34:10.558607 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g7dh6" event={"ID":"a401ec6a-978c-42d0-9cba-af38aebd03d2","Type":"ContainerStarted","Data":"734209937e0ed43a159ec63c0a36170bb4b7135562e395b17ecab62155c51c62"} Feb 18 19:34:10 crc kubenswrapper[4754]: I0218 19:34:10.561238 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4h67j" event={"ID":"9e40ef7c-1af7-4981-a2a4-bcc98c82fc58","Type":"ContainerStarted","Data":"1827e7ccbf67a5e33022d1f17acdbb1a3368be71bdfb0ac39a702389e659ca48"} Feb 18 19:34:10 crc kubenswrapper[4754]: I0218 19:34:10.568565 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-prm45" event={"ID":"6e2e06c8-6b3f-48d8-9b24-acb4fdbfb2e2","Type":"ContainerStarted","Data":"15769523f2ede0550857e576deaf8549530e242e1535e784a18640ccd226b5e8"} Feb 18 19:34:10 crc kubenswrapper[4754]: I0218 19:34:10.583374 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wn272" event={"ID":"9fdc5e67-9299-4d07-b2b5-b8c766e68469","Type":"ContainerStarted","Data":"160a82afcf153eb3669a1be28b4b327cd9eeff4844ee93ff698b353e1887b477"} Feb 18 19:34:10 crc kubenswrapper[4754]: E0218 19:34:10.586814 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wn272" podUID="9fdc5e67-9299-4d07-b2b5-b8c766e68469" Feb 18 19:34:10 crc kubenswrapper[4754]: I0218 19:34:10.597318 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hmjgh" event={"ID":"98ea0633-bf9d-410b-bfa5-e71f322755f3","Type":"ContainerStarted","Data":"fbe079ab5e7f4ea38bf9aa22b6f55bebc26ee60ab952927fdd0d4e8580448823"} Feb 18 19:34:10 crc kubenswrapper[4754]: I0218 19:34:10.603085 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7287z" event={"ID":"94e4720b-a822-4e0b-adb2-3dbcab23d98c","Type":"ContainerStarted","Data":"7f353a28273051af29c0a82403162ee22bb932c1dacd61f9a11474ce35ee6090"} Feb 18 19:34:10 crc kubenswrapper[4754]: E0218 19:34:10.605888 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7287z" podUID="94e4720b-a822-4e0b-adb2-3dbcab23d98c" Feb 18 19:34:10 crc kubenswrapper[4754]: E0218 19:34:10.606084 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hmjgh" podUID="98ea0633-bf9d-410b-bfa5-e71f322755f3" Feb 18 19:34:10 crc kubenswrapper[4754]: I0218 19:34:10.606691 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-fm9ds" event={"ID":"01e7cd39-1236-497c-a0f5-916631fde3ee","Type":"ContainerStarted","Data":"57be8e5842f5670f2d83c6229aeb8876c1186806732cb6fc3fd84a77ebd778bf"} Feb 18 19:34:10 crc kubenswrapper[4754]: E0218 19:34:10.626795 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-fm9ds" podUID="01e7cd39-1236-497c-a0f5-916631fde3ee" Feb 18 19:34:10 crc kubenswrapper[4754]: I0218 19:34:10.632738 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-nlwpx" event={"ID":"460668cc-09d6-4ea3-a100-5fe7cd58e9f5","Type":"ContainerStarted","Data":"d235a3fb9cbd23b7d51f9b96eb136820926019e3ec6bc9441f84cfd3594b2f19"} Feb 18 19:34:10 crc kubenswrapper[4754]: E0218 19:34:10.643586 4754 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-nlwpx" podUID="460668cc-09d6-4ea3-a100-5fe7cd58e9f5" Feb 18 19:34:10 crc kubenswrapper[4754]: I0218 19:34:10.660226 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert\") pod \"infra-operator-controller-manager-79d975b745-bbk55\" (UID: \"9fb89866-aaf8-4478-ae67-7901a682f0e2\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bbk55" Feb 18 19:34:10 crc kubenswrapper[4754]: E0218 19:34:10.660918 4754 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:34:10 crc kubenswrapper[4754]: E0218 19:34:10.661059 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert podName:9fb89866-aaf8-4478-ae67-7901a682f0e2 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:14.661023039 +0000 UTC m=+957.111435845 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert") pod "infra-operator-controller-manager-79d975b745-bbk55" (UID: "9fb89866-aaf8-4478-ae67-7901a682f0e2") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:34:10 crc kubenswrapper[4754]: I0218 19:34:10.863964 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q\" (UID: \"400479c2-96d2-4645-abf3-a03726e86cfb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" Feb 18 19:34:10 crc kubenswrapper[4754]: E0218 19:34:10.864255 4754 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:34:10 crc kubenswrapper[4754]: E0218 19:34:10.864381 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert podName:400479c2-96d2-4645-abf3-a03726e86cfb nodeName:}" failed. No retries permitted until 2026-02-18 19:34:14.864348739 +0000 UTC m=+957.314761535 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" (UID: "400479c2-96d2-4645-abf3-a03726e86cfb") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:34:11 crc kubenswrapper[4754]: I0218 19:34:11.578540 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs\") pod \"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:11 crc kubenswrapper[4754]: E0218 19:34:11.578775 4754 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:34:11 crc kubenswrapper[4754]: I0218 19:34:11.579172 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-webhook-certs\") pod \"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:11 crc kubenswrapper[4754]: E0218 19:34:11.579233 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs podName:7863f879-0b49-4521-9f57-8d90c41dc154 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:15.579201788 +0000 UTC m=+958.029614584 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs") pod "openstack-operator-controller-manager-6b6dff6c94-jfn8q" (UID: "7863f879-0b49-4521-9f57-8d90c41dc154") : secret "metrics-server-cert" not found Feb 18 19:34:11 crc kubenswrapper[4754]: E0218 19:34:11.579405 4754 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:34:11 crc kubenswrapper[4754]: E0218 19:34:11.579568 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-webhook-certs podName:7863f879-0b49-4521-9f57-8d90c41dc154 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:15.579546208 +0000 UTC m=+958.029959004 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-webhook-certs") pod "openstack-operator-controller-manager-6b6dff6c94-jfn8q" (UID: "7863f879-0b49-4521-9f57-8d90c41dc154") : secret "webhook-server-cert" not found Feb 18 19:34:11 crc kubenswrapper[4754]: E0218 19:34:11.640169 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hmjgh" podUID="98ea0633-bf9d-410b-bfa5-e71f322755f3" Feb 18 19:34:11 crc kubenswrapper[4754]: E0218 19:34:11.640889 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.98:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114\\\"\"" 
pod="openstack-operators/watcher-operator-controller-manager-747b8fb99f-kxcrx" podUID="70e6e7fe-dfdf-4bcf-815a-7e9c79e77965" Feb 18 19:34:11 crc kubenswrapper[4754]: E0218 19:34:11.640991 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wn272" podUID="9fdc5e67-9299-4d07-b2b5-b8c766e68469" Feb 18 19:34:11 crc kubenswrapper[4754]: E0218 19:34:11.646255 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7287z" podUID="94e4720b-a822-4e0b-adb2-3dbcab23d98c" Feb 18 19:34:11 crc kubenswrapper[4754]: E0218 19:34:11.648124 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-nlwpx" podUID="460668cc-09d6-4ea3-a100-5fe7cd58e9f5" Feb 18 19:34:11 crc kubenswrapper[4754]: E0218 19:34:11.648607 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-fm9ds" 
podUID="01e7cd39-1236-497c-a0f5-916631fde3ee" Feb 18 19:34:11 crc kubenswrapper[4754]: E0218 19:34:11.648660 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q92wm" podUID="af3235d8-efd9-4edb-bd48-5f7ec1e41524" Feb 18 19:34:14 crc kubenswrapper[4754]: I0218 19:34:14.739989 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert\") pod \"infra-operator-controller-manager-79d975b745-bbk55\" (UID: \"9fb89866-aaf8-4478-ae67-7901a682f0e2\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bbk55" Feb 18 19:34:14 crc kubenswrapper[4754]: E0218 19:34:14.740267 4754 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:34:14 crc kubenswrapper[4754]: E0218 19:34:14.741639 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert podName:9fb89866-aaf8-4478-ae67-7901a682f0e2 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:22.741613331 +0000 UTC m=+965.192026127 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert") pod "infra-operator-controller-manager-79d975b745-bbk55" (UID: "9fb89866-aaf8-4478-ae67-7901a682f0e2") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:34:14 crc kubenswrapper[4754]: I0218 19:34:14.945268 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q\" (UID: \"400479c2-96d2-4645-abf3-a03726e86cfb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" Feb 18 19:34:14 crc kubenswrapper[4754]: E0218 19:34:14.945527 4754 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:34:14 crc kubenswrapper[4754]: E0218 19:34:14.945632 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert podName:400479c2-96d2-4645-abf3-a03726e86cfb nodeName:}" failed. No retries permitted until 2026-02-18 19:34:22.945609711 +0000 UTC m=+965.396022507 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" (UID: "400479c2-96d2-4645-abf3-a03726e86cfb") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:34:15 crc kubenswrapper[4754]: I0218 19:34:15.659572 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs\") pod \"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:15 crc kubenswrapper[4754]: I0218 19:34:15.659676 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-webhook-certs\") pod \"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:15 crc kubenswrapper[4754]: E0218 19:34:15.659808 4754 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:34:15 crc kubenswrapper[4754]: E0218 19:34:15.659808 4754 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:34:15 crc kubenswrapper[4754]: E0218 19:34:15.661600 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-webhook-certs podName:7863f879-0b49-4521-9f57-8d90c41dc154 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:23.659856481 +0000 UTC m=+966.110269277 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-webhook-certs") pod "openstack-operator-controller-manager-6b6dff6c94-jfn8q" (UID: "7863f879-0b49-4521-9f57-8d90c41dc154") : secret "webhook-server-cert" not found Feb 18 19:34:15 crc kubenswrapper[4754]: E0218 19:34:15.661635 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs podName:7863f879-0b49-4521-9f57-8d90c41dc154 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:23.661625775 +0000 UTC m=+966.112038571 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs") pod "openstack-operator-controller-manager-6b6dff6c94-jfn8q" (UID: "7863f879-0b49-4521-9f57-8d90c41dc154") : secret "metrics-server-cert" not found Feb 18 19:34:21 crc kubenswrapper[4754]: E0218 19:34:21.592757 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 18 19:34:21 crc kubenswrapper[4754]: E0218 19:34:21.593823 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ksjlx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-4h67j_openstack-operators(9e40ef7c-1af7-4981-a2a4-bcc98c82fc58): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:34:21 crc kubenswrapper[4754]: E0218 19:34:21.597615 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4h67j" podUID="9e40ef7c-1af7-4981-a2a4-bcc98c82fc58" Feb 18 19:34:21 crc kubenswrapper[4754]: E0218 19:34:21.745369 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4h67j" podUID="9e40ef7c-1af7-4981-a2a4-bcc98c82fc58" Feb 18 19:34:22 crc kubenswrapper[4754]: E0218 19:34:22.282040 4754 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" Feb 18 19:34:22 crc kubenswrapper[4754]: E0218 19:34:22.282299 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b2wzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d8bf5c495-s85zg_openstack-operators(72d3d891-2e01-4f49-bb96-45089a1fb702): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:34:22 crc kubenswrapper[4754]: E0218 19:34:22.283552 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-s85zg" podUID="72d3d891-2e01-4f49-bb96-45089a1fb702" Feb 18 19:34:22 crc kubenswrapper[4754]: E0218 19:34:22.753915 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-s85zg" podUID="72d3d891-2e01-4f49-bb96-45089a1fb702" Feb 18 19:34:22 crc kubenswrapper[4754]: I0218 19:34:22.812482 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert\") pod \"infra-operator-controller-manager-79d975b745-bbk55\" (UID: \"9fb89866-aaf8-4478-ae67-7901a682f0e2\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bbk55" Feb 18 19:34:22 crc kubenswrapper[4754]: E0218 19:34:22.812728 4754 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:34:22 crc kubenswrapper[4754]: E0218 19:34:22.812858 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert podName:9fb89866-aaf8-4478-ae67-7901a682f0e2 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:38.812826423 +0000 UTC m=+981.263239409 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert") pod "infra-operator-controller-manager-79d975b745-bbk55" (UID: "9fb89866-aaf8-4478-ae67-7901a682f0e2") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:34:23 crc kubenswrapper[4754]: I0218 19:34:23.017727 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q\" (UID: \"400479c2-96d2-4645-abf3-a03726e86cfb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" Feb 18 19:34:23 crc kubenswrapper[4754]: E0218 19:34:23.017970 4754 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:34:23 crc kubenswrapper[4754]: E0218 19:34:23.018086 4754 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert podName:400479c2-96d2-4645-abf3-a03726e86cfb nodeName:}" failed. No retries permitted until 2026-02-18 19:34:39.018056921 +0000 UTC m=+981.468469907 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" (UID: "400479c2-96d2-4645-abf3-a03726e86cfb") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:34:23 crc kubenswrapper[4754]: E0218 19:34:23.685813 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 18 19:34:23 crc kubenswrapper[4754]: E0218 19:34:23.686081 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-42smc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-jgbhw_openstack-operators(e870610a-7a60-4273-bc6f-5512fc6570c2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:34:23 crc kubenswrapper[4754]: E0218 19:34:23.687575 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jgbhw" podUID="e870610a-7a60-4273-bc6f-5512fc6570c2" Feb 18 19:34:23 crc kubenswrapper[4754]: I0218 19:34:23.728712 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-webhook-certs\") pod \"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:23 crc kubenswrapper[4754]: I0218 19:34:23.728840 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs\") pod \"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:23 crc kubenswrapper[4754]: E0218 19:34:23.729064 4754 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:34:23 crc kubenswrapper[4754]: E0218 19:34:23.729139 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs podName:7863f879-0b49-4521-9f57-8d90c41dc154 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:39.729113483 +0000 UTC m=+982.179526289 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs") pod "openstack-operator-controller-manager-6b6dff6c94-jfn8q" (UID: "7863f879-0b49-4521-9f57-8d90c41dc154") : secret "metrics-server-cert" not found Feb 18 19:34:23 crc kubenswrapper[4754]: I0218 19:34:23.743216 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-webhook-certs\") pod \"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:23 crc kubenswrapper[4754]: E0218 19:34:23.769258 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jgbhw" podUID="e870610a-7a60-4273-bc6f-5512fc6570c2" Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.789614 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-gvj77" event={"ID":"3d21ba73-d3ac-4256-8fa8-451908a2e585","Type":"ContainerStarted","Data":"de4ed81095e1dd3c553e65b2a237ee9678d01d50f499ddbd6c1e96c53cfc9dac"} Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.790014 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-gvj77" Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.810220 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hsrft" 
event={"ID":"ff462b12-ae60-4603-a1f2-07eb886af80f","Type":"ContainerStarted","Data":"8d63ad59ae8c1922b436cf32720fffb420e93aabd69700c07a1be78365e70a29"} Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.810330 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hsrft" Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.822243 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kpj7s" event={"ID":"b9027f53-0411-4ada-9f8d-31d952c5039e","Type":"ContainerStarted","Data":"674b93e766e0ce0176344c2bca3bd21cd9d0b4fed99a78e84739b0ddf449402d"} Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.822342 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kpj7s" Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.826779 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c56gw" event={"ID":"f9f82a7f-36d1-4ff0-9053-a688d0691148","Type":"ContainerStarted","Data":"4751ea79d71dd39a0456a1047c117214f1bb8f4fd609081621b8a38264a733a2"} Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.827473 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c56gw" Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.832578 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g7dh6" event={"ID":"a401ec6a-978c-42d0-9cba-af38aebd03d2","Type":"ContainerStarted","Data":"78e55c809bfe99df981c09419fafc6c8304283cfb182f7d4d793d840936f415a"} Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.832958 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g7dh6" Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.845976 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-btfr9" event={"ID":"298ec2c6-0d06-413a-a0c5-d381423fb11b","Type":"ContainerStarted","Data":"7c807668719336dc239e62a9b608fac8612a524526aaa06e508f826aae77200d"} Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.846299 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-btfr9" Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.849465 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-nkmr9" event={"ID":"8dd7404b-411a-47f3-92f1-a94b2cfade39","Type":"ContainerStarted","Data":"e958afda6a94824bc51bab332564961b0e1da7c48c1d5ebe285d789963918f64"} Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.850402 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-nkmr9" Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.856526 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-prm45" event={"ID":"6e2e06c8-6b3f-48d8-9b24-acb4fdbfb2e2","Type":"ContainerStarted","Data":"f3071386215fead510a2cd40a3bed61f8504fb28cddc2f69efc56f6f5e6cadea"} Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.856984 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-prm45" Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.877009 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pqkzp" 
event={"ID":"01f6e565-c160-40a7-8456-921ecb9980bf","Type":"ContainerStarted","Data":"dcff9817e095f78bb016f93bf6a78180bba651666195601cba0a063ef2c92e56"} Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.877710 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pqkzp" Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.894375 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-54jtq" event={"ID":"c908b6f1-79ef-4afb-8a99-9bee5b842a47","Type":"ContainerStarted","Data":"6dca16ee603724a3409e3f31d89311ad28cb5121f6ba6e0de0538fe5bbb11d2b"} Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.895334 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-54jtq" Feb 18 19:34:24 crc kubenswrapper[4754]: I0218 19:34:24.903986 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-gvj77" podStartSLOduration=4.406543629 podStartE2EDuration="18.903967809s" podCreationTimestamp="2026-02-18 19:34:06 +0000 UTC" firstStartedPulling="2026-02-18 19:34:09.216495274 +0000 UTC m=+951.666908070" lastFinishedPulling="2026-02-18 19:34:23.713919454 +0000 UTC m=+966.164332250" observedRunningTime="2026-02-18 19:34:24.901115761 +0000 UTC m=+967.351528557" watchObservedRunningTime="2026-02-18 19:34:24.903967809 +0000 UTC m=+967.354380605" Feb 18 19:34:25 crc kubenswrapper[4754]: I0218 19:34:25.027934 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g7dh6" podStartSLOduration=3.851382718 podStartE2EDuration="18.027907376s" podCreationTimestamp="2026-02-18 19:34:07 +0000 UTC" firstStartedPulling="2026-02-18 19:34:09.53720998 +0000 UTC m=+951.987622786" 
lastFinishedPulling="2026-02-18 19:34:23.713734638 +0000 UTC m=+966.164147444" observedRunningTime="2026-02-18 19:34:25.026811253 +0000 UTC m=+967.477224049" watchObservedRunningTime="2026-02-18 19:34:25.027907376 +0000 UTC m=+967.478320172" Feb 18 19:34:25 crc kubenswrapper[4754]: I0218 19:34:25.074415 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pqkzp" podStartSLOduration=4.156196239 podStartE2EDuration="19.074387683s" podCreationTimestamp="2026-02-18 19:34:06 +0000 UTC" firstStartedPulling="2026-02-18 19:34:08.792406847 +0000 UTC m=+951.242819643" lastFinishedPulling="2026-02-18 19:34:23.710598291 +0000 UTC m=+966.161011087" observedRunningTime="2026-02-18 19:34:25.073239017 +0000 UTC m=+967.523651813" watchObservedRunningTime="2026-02-18 19:34:25.074387683 +0000 UTC m=+967.524800479" Feb 18 19:34:25 crc kubenswrapper[4754]: I0218 19:34:25.160305 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-btfr9" podStartSLOduration=4.6583192350000004 podStartE2EDuration="19.160281705s" podCreationTimestamp="2026-02-18 19:34:06 +0000 UTC" firstStartedPulling="2026-02-18 19:34:09.213232113 +0000 UTC m=+951.663644909" lastFinishedPulling="2026-02-18 19:34:23.715194583 +0000 UTC m=+966.165607379" observedRunningTime="2026-02-18 19:34:25.144300422 +0000 UTC m=+967.594713218" watchObservedRunningTime="2026-02-18 19:34:25.160281705 +0000 UTC m=+967.610694501" Feb 18 19:34:25 crc kubenswrapper[4754]: I0218 19:34:25.216064 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kpj7s" podStartSLOduration=4.780678245 podStartE2EDuration="19.216020317s" podCreationTimestamp="2026-02-18 19:34:06 +0000 UTC" firstStartedPulling="2026-02-18 19:34:09.281134131 +0000 UTC m=+951.731546927" 
lastFinishedPulling="2026-02-18 19:34:23.716476193 +0000 UTC m=+966.166888999" observedRunningTime="2026-02-18 19:34:25.20511538 +0000 UTC m=+967.655528176" watchObservedRunningTime="2026-02-18 19:34:25.216020317 +0000 UTC m=+967.666433113" Feb 18 19:34:25 crc kubenswrapper[4754]: I0218 19:34:25.258099 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-nkmr9" podStartSLOduration=4.806260714 podStartE2EDuration="19.258068535s" podCreationTimestamp="2026-02-18 19:34:06 +0000 UTC" firstStartedPulling="2026-02-18 19:34:09.266329163 +0000 UTC m=+951.716741959" lastFinishedPulling="2026-02-18 19:34:23.718136984 +0000 UTC m=+966.168549780" observedRunningTime="2026-02-18 19:34:25.255233798 +0000 UTC m=+967.705646594" watchObservedRunningTime="2026-02-18 19:34:25.258068535 +0000 UTC m=+967.708481331" Feb 18 19:34:25 crc kubenswrapper[4754]: I0218 19:34:25.303501 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hsrft" podStartSLOduration=4.875752051 podStartE2EDuration="19.303478358s" podCreationTimestamp="2026-02-18 19:34:06 +0000 UTC" firstStartedPulling="2026-02-18 19:34:09.287362273 +0000 UTC m=+951.737775079" lastFinishedPulling="2026-02-18 19:34:23.71508858 +0000 UTC m=+966.165501386" observedRunningTime="2026-02-18 19:34:25.301482837 +0000 UTC m=+967.751895633" watchObservedRunningTime="2026-02-18 19:34:25.303478358 +0000 UTC m=+967.753891154" Feb 18 19:34:25 crc kubenswrapper[4754]: I0218 19:34:25.351379 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c56gw" podStartSLOduration=4.811811657 podStartE2EDuration="19.351360097s" podCreationTimestamp="2026-02-18 19:34:06 +0000 UTC" firstStartedPulling="2026-02-18 19:34:09.172839306 +0000 UTC m=+951.623252092" lastFinishedPulling="2026-02-18 
19:34:23.712387736 +0000 UTC m=+966.162800532" observedRunningTime="2026-02-18 19:34:25.348698375 +0000 UTC m=+967.799111171" watchObservedRunningTime="2026-02-18 19:34:25.351360097 +0000 UTC m=+967.801772893" Feb 18 19:34:25 crc kubenswrapper[4754]: I0218 19:34:25.376834 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-54jtq" podStartSLOduration=4.921957699 podStartE2EDuration="19.376803843s" podCreationTimestamp="2026-02-18 19:34:06 +0000 UTC" firstStartedPulling="2026-02-18 19:34:09.257865072 +0000 UTC m=+951.708277868" lastFinishedPulling="2026-02-18 19:34:23.712711196 +0000 UTC m=+966.163124012" observedRunningTime="2026-02-18 19:34:25.374518452 +0000 UTC m=+967.824931248" watchObservedRunningTime="2026-02-18 19:34:25.376803843 +0000 UTC m=+967.827216639" Feb 18 19:34:25 crc kubenswrapper[4754]: I0218 19:34:25.418404 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-prm45" podStartSLOduration=5.24219799 podStartE2EDuration="19.418385537s" podCreationTimestamp="2026-02-18 19:34:06 +0000 UTC" firstStartedPulling="2026-02-18 19:34:09.548976924 +0000 UTC m=+951.999389720" lastFinishedPulling="2026-02-18 19:34:23.725164461 +0000 UTC m=+966.175577267" observedRunningTime="2026-02-18 19:34:25.415555259 +0000 UTC m=+967.865968055" watchObservedRunningTime="2026-02-18 19:34:25.418385537 +0000 UTC m=+967.868798333" Feb 18 19:34:33 crc kubenswrapper[4754]: I0218 19:34:33.215468 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:34:36 crc kubenswrapper[4754]: E0218 19:34:36.378861 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" 
Feb 18 19:34:36 crc kubenswrapper[4754]: E0218 19:34:36.379460 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lq7pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-q92wm_openstack-operators(af3235d8-efd9-4edb-bd48-5f7ec1e41524): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:34:36 crc kubenswrapper[4754]: E0218 19:34:36.380701 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q92wm" podUID="af3235d8-efd9-4edb-bd48-5f7ec1e41524" Feb 18 19:34:36 crc kubenswrapper[4754]: I0218 19:34:36.871809 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pqkzp" Feb 18 19:34:36 crc kubenswrapper[4754]: I0218 19:34:36.957700 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-btfr9" Feb 18 19:34:36 crc kubenswrapper[4754]: E0218 19:34:36.982599 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = 
copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" Feb 18 19:34:36 crc kubenswrapper[4754]: E0218 19:34:36.982930 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n962j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-nlwpx_openstack-operators(460668cc-09d6-4ea3-a100-5fe7cd58e9f5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:34:36 crc kubenswrapper[4754]: E0218 19:34:36.984331 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7866795846-nlwpx" podUID="460668cc-09d6-4ea3-a100-5fe7cd58e9f5" Feb 18 19:34:37 crc kubenswrapper[4754]: I0218 19:34:37.153574 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-c56gw" Feb 18 19:34:37 crc kubenswrapper[4754]: I0218 19:34:37.322222 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-prm45" Feb 18 19:34:37 crc kubenswrapper[4754]: I0218 19:34:37.323478 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/glance-operator-controller-manager-77987464f4-kpj7s" Feb 18 19:34:37 crc kubenswrapper[4754]: I0218 19:34:37.345572 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-gvj77" Feb 18 19:34:37 crc kubenswrapper[4754]: I0218 19:34:37.350435 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-54jtq" Feb 18 19:34:37 crc kubenswrapper[4754]: I0218 19:34:37.635327 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-nkmr9" Feb 18 19:34:37 crc kubenswrapper[4754]: I0218 19:34:37.640177 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-hsrft" Feb 18 19:34:37 crc kubenswrapper[4754]: I0218 19:34:37.794592 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-g7dh6" Feb 18 19:34:38 crc kubenswrapper[4754]: I0218 19:34:38.908856 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert\") pod \"infra-operator-controller-manager-79d975b745-bbk55\" (UID: \"9fb89866-aaf8-4478-ae67-7901a682f0e2\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bbk55" Feb 18 19:34:38 crc kubenswrapper[4754]: I0218 19:34:38.937170 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fb89866-aaf8-4478-ae67-7901a682f0e2-cert\") pod \"infra-operator-controller-manager-79d975b745-bbk55\" (UID: \"9fb89866-aaf8-4478-ae67-7901a682f0e2\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-bbk55" Feb 18 
19:34:38 crc kubenswrapper[4754]: I0218 19:34:38.967303 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-6mlcg" Feb 18 19:34:38 crc kubenswrapper[4754]: I0218 19:34:38.975859 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bbk55" Feb 18 19:34:39 crc kubenswrapper[4754]: I0218 19:34:39.110643 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q\" (UID: \"400479c2-96d2-4645-abf3-a03726e86cfb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" Feb 18 19:34:39 crc kubenswrapper[4754]: I0218 19:34:39.116190 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/400479c2-96d2-4645-abf3-a03726e86cfb-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q\" (UID: \"400479c2-96d2-4645-abf3-a03726e86cfb\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" Feb 18 19:34:39 crc kubenswrapper[4754]: E0218 19:34:39.126726 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 18 19:34:39 crc kubenswrapper[4754]: E0218 19:34:39.126963 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fm7tx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-7287z_openstack-operators(94e4720b-a822-4e0b-adb2-3dbcab23d98c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:34:39 crc kubenswrapper[4754]: E0218 19:34:39.128215 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7287z" podUID="94e4720b-a822-4e0b-adb2-3dbcab23d98c" Feb 18 19:34:39 crc kubenswrapper[4754]: I0218 19:34:39.297777 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pj7wx" Feb 18 19:34:39 crc kubenswrapper[4754]: I0218 19:34:39.306439 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" Feb 18 19:34:39 crc kubenswrapper[4754]: E0218 19:34:39.769383 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.98:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114" Feb 18 19:34:39 crc kubenswrapper[4754]: E0218 19:34:39.769458 4754 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.98:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114" Feb 18 19:34:39 crc kubenswrapper[4754]: E0218 19:34:39.769745 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.98:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-czx8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-747b8fb99f-kxcrx_openstack-operators(70e6e7fe-dfdf-4bcf-815a-7e9c79e77965): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:34:39 crc kubenswrapper[4754]: E0218 19:34:39.772250 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-747b8fb99f-kxcrx" podUID="70e6e7fe-dfdf-4bcf-815a-7e9c79e77965" Feb 18 19:34:39 crc kubenswrapper[4754]: I0218 19:34:39.821575 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs\") pod 
\"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:39 crc kubenswrapper[4754]: I0218 19:34:39.826842 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7863f879-0b49-4521-9f57-8d90c41dc154-metrics-certs\") pod \"openstack-operator-controller-manager-6b6dff6c94-jfn8q\" (UID: \"7863f879-0b49-4521-9f57-8d90c41dc154\") " pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:40 crc kubenswrapper[4754]: I0218 19:34:40.123385 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-94t2p" Feb 18 19:34:40 crc kubenswrapper[4754]: I0218 19:34:40.131599 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:42 crc kubenswrapper[4754]: E0218 19:34:42.116250 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 18 19:34:42 crc kubenswrapper[4754]: E0218 19:34:42.116933 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6rp2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-hmjgh_openstack-operators(98ea0633-bf9d-410b-bfa5-e71f322755f3): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:34:42 crc kubenswrapper[4754]: E0218 19:34:42.118451 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hmjgh" podUID="98ea0633-bf9d-410b-bfa5-e71f322755f3" Feb 18 19:34:42 crc kubenswrapper[4754]: I0218 19:34:42.997562 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q"] Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.081712 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q"] Feb 18 19:34:43 crc kubenswrapper[4754]: W0218 19:34:43.091488 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod400479c2_96d2_4645_abf3_a03726e86cfb.slice/crio-a579af075970ada27f41e0575d8c8243896eeb87bbea888ea5db9633de980bdb WatchSource:0}: Error finding container a579af075970ada27f41e0575d8c8243896eeb87bbea888ea5db9633de980bdb: Status 404 returned error can't find the container with id a579af075970ada27f41e0575d8c8243896eeb87bbea888ea5db9633de980bdb Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.108506 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-bbk55"] Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.294664 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bbk55" event={"ID":"9fb89866-aaf8-4478-ae67-7901a682f0e2","Type":"ContainerStarted","Data":"429ad76ea514bbc3ebe6d68c9060cec8f9024084a460376d1a869dbba555e9dd"} Feb 18 19:34:43 
crc kubenswrapper[4754]: I0218 19:34:43.296994 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jgbhw" event={"ID":"e870610a-7a60-4273-bc6f-5512fc6570c2","Type":"ContainerStarted","Data":"219b4ad1c203c8c0ac3f0141944a2ed9a752b7ad603dcb8235ade756bdd29441"} Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.297951 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jgbhw" Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.299606 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-s85zg" event={"ID":"72d3d891-2e01-4f49-bb96-45089a1fb702","Type":"ContainerStarted","Data":"0d519412c0e3dc27f63a8c3a3bf6d9605255c1686266bd5afe54d792d28c321e"} Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.300329 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-s85zg" Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.302592 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-fm9ds" event={"ID":"01e7cd39-1236-497c-a0f5-916631fde3ee","Type":"ContainerStarted","Data":"f4ce5c064e71fa612736145ef30fc235ae46b6d10d61a97469c2d2211c7611f9"} Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.302883 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-fm9ds" Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.305530 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4h67j" 
event={"ID":"9e40ef7c-1af7-4981-a2a4-bcc98c82fc58","Type":"ContainerStarted","Data":"88e26d5f24adb115d045abd8478a37bbb4f1e1d1730c17e1800a635af635ec93"} Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.306505 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4h67j" Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.312416 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" event={"ID":"7863f879-0b49-4521-9f57-8d90c41dc154","Type":"ContainerStarted","Data":"d2d9a49d93404ae14091bfcf4f03b5c27df670c58f75af74413adfc531eb1359"} Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.312469 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" event={"ID":"7863f879-0b49-4521-9f57-8d90c41dc154","Type":"ContainerStarted","Data":"fbf7f34bc928f4b37266f0593e4e2a0c14ecbed554e5b50b06386da29e52b30c"} Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.312496 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.313660 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" event={"ID":"400479c2-96d2-4645-abf3-a03726e86cfb","Type":"ContainerStarted","Data":"a579af075970ada27f41e0575d8c8243896eeb87bbea888ea5db9633de980bdb"} Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.315464 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wn272" event={"ID":"9fdc5e67-9299-4d07-b2b5-b8c766e68469","Type":"ContainerStarted","Data":"87f882ab11739a52d352a940dc7d742d007d1ee9a7b5a3beac10a10d9c537850"} Feb 18 
19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.315894 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wn272" Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.320758 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jgbhw" podStartSLOduration=3.995409937 podStartE2EDuration="37.320739676s" podCreationTimestamp="2026-02-18 19:34:06 +0000 UTC" firstStartedPulling="2026-02-18 19:34:09.268733068 +0000 UTC m=+951.719145854" lastFinishedPulling="2026-02-18 19:34:42.594062797 +0000 UTC m=+985.044475593" observedRunningTime="2026-02-18 19:34:43.315623559 +0000 UTC m=+985.766036355" watchObservedRunningTime="2026-02-18 19:34:43.320739676 +0000 UTC m=+985.771152472" Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.339409 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wn272" podStartSLOduration=3.619992611 podStartE2EDuration="36.339382996s" podCreationTimestamp="2026-02-18 19:34:07 +0000 UTC" firstStartedPulling="2026-02-18 19:34:09.616408716 +0000 UTC m=+952.066936865" lastFinishedPulling="2026-02-18 19:34:42.335914454 +0000 UTC m=+984.786327250" observedRunningTime="2026-02-18 19:34:43.333479476 +0000 UTC m=+985.783892272" watchObservedRunningTime="2026-02-18 19:34:43.339382996 +0000 UTC m=+985.789795792" Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.377876 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-fm9ds" podStartSLOduration=4.632128794 podStartE2EDuration="37.377849522s" podCreationTimestamp="2026-02-18 19:34:06 +0000 UTC" firstStartedPulling="2026-02-18 19:34:09.589930478 +0000 UTC m=+952.040343274" lastFinishedPulling="2026-02-18 19:34:42.335651196 +0000 UTC 
m=+984.786064002" observedRunningTime="2026-02-18 19:34:43.370763715 +0000 UTC m=+985.821176511" watchObservedRunningTime="2026-02-18 19:34:43.377849522 +0000 UTC m=+985.828262318" Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.434688 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-s85zg" podStartSLOduration=4.085238466 podStartE2EDuration="37.43466306s" podCreationTimestamp="2026-02-18 19:34:06 +0000 UTC" firstStartedPulling="2026-02-18 19:34:09.244636263 +0000 UTC m=+951.695049069" lastFinishedPulling="2026-02-18 19:34:42.594060867 +0000 UTC m=+985.044473663" observedRunningTime="2026-02-18 19:34:43.401187786 +0000 UTC m=+985.851600582" watchObservedRunningTime="2026-02-18 19:34:43.43466306 +0000 UTC m=+985.885075856" Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.435574 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4h67j" podStartSLOduration=4.445509377 podStartE2EDuration="37.435566527s" podCreationTimestamp="2026-02-18 19:34:06 +0000 UTC" firstStartedPulling="2026-02-18 19:34:09.587206994 +0000 UTC m=+952.037619800" lastFinishedPulling="2026-02-18 19:34:42.577264144 +0000 UTC m=+985.027676950" observedRunningTime="2026-02-18 19:34:43.42947377 +0000 UTC m=+985.879886566" watchObservedRunningTime="2026-02-18 19:34:43.435566527 +0000 UTC m=+985.885979323" Feb 18 19:34:43 crc kubenswrapper[4754]: I0218 19:34:43.465357 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" podStartSLOduration=36.465334798 podStartE2EDuration="36.465334798s" podCreationTimestamp="2026-02-18 19:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:34:43.459343494 +0000 UTC 
m=+985.909756290" watchObservedRunningTime="2026-02-18 19:34:43.465334798 +0000 UTC m=+985.915747594" Feb 18 19:34:47 crc kubenswrapper[4754]: I0218 19:34:47.470271 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-fm9ds" Feb 18 19:34:47 crc kubenswrapper[4754]: I0218 19:34:47.498048 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-4h67j" Feb 18 19:34:47 crc kubenswrapper[4754]: I0218 19:34:47.575029 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-jgbhw" Feb 18 19:34:47 crc kubenswrapper[4754]: I0218 19:34:47.793871 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-wn272" Feb 18 19:34:48 crc kubenswrapper[4754]: I0218 19:34:48.374335 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" event={"ID":"400479c2-96d2-4645-abf3-a03726e86cfb","Type":"ContainerStarted","Data":"fb906d30cf9c0d2d5272296c23f08c0da460d8282a15eca6d32710f28f54c607"} Feb 18 19:34:48 crc kubenswrapper[4754]: I0218 19:34:48.374476 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" Feb 18 19:34:48 crc kubenswrapper[4754]: I0218 19:34:48.376551 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bbk55" event={"ID":"9fb89866-aaf8-4478-ae67-7901a682f0e2","Type":"ContainerStarted","Data":"1898a518f0ac4f251782d53870d5592b911f114d6436f1e938bb168fa76fb317"} Feb 18 19:34:48 crc kubenswrapper[4754]: I0218 19:34:48.376750 4754 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bbk55" Feb 18 19:34:48 crc kubenswrapper[4754]: I0218 19:34:48.413365 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" podStartSLOduration=38.481074133999996 podStartE2EDuration="42.413338758s" podCreationTimestamp="2026-02-18 19:34:06 +0000 UTC" firstStartedPulling="2026-02-18 19:34:43.094460637 +0000 UTC m=+985.544873433" lastFinishedPulling="2026-02-18 19:34:47.026725261 +0000 UTC m=+989.477138057" observedRunningTime="2026-02-18 19:34:48.40785882 +0000 UTC m=+990.858271636" watchObservedRunningTime="2026-02-18 19:34:48.413338758 +0000 UTC m=+990.863751554" Feb 18 19:34:48 crc kubenswrapper[4754]: I0218 19:34:48.427502 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bbk55" podStartSLOduration=38.519676324 podStartE2EDuration="42.42748015s" podCreationTimestamp="2026-02-18 19:34:06 +0000 UTC" firstStartedPulling="2026-02-18 19:34:43.101850023 +0000 UTC m=+985.552262819" lastFinishedPulling="2026-02-18 19:34:47.009653849 +0000 UTC m=+989.460066645" observedRunningTime="2026-02-18 19:34:48.424570761 +0000 UTC m=+990.874983557" watchObservedRunningTime="2026-02-18 19:34:48.42748015 +0000 UTC m=+990.877892946" Feb 18 19:34:50 crc kubenswrapper[4754]: I0218 19:34:50.139995 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6b6dff6c94-jfn8q" Feb 18 19:34:51 crc kubenswrapper[4754]: E0218 19:34:51.212215 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-q92wm" podUID="af3235d8-efd9-4edb-bd48-5f7ec1e41524" Feb 18 19:34:52 crc kubenswrapper[4754]: E0218 19:34:52.214678 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-nlwpx" podUID="460668cc-09d6-4ea3-a100-5fe7cd58e9f5" Feb 18 19:34:53 crc kubenswrapper[4754]: E0218 19:34:53.211556 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.98:5001/openstack-k8s-operators/watcher-operator:b81fb4c6e252d904b45b75754882e721f2b86114\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-747b8fb99f-kxcrx" podUID="70e6e7fe-dfdf-4bcf-815a-7e9c79e77965" Feb 18 19:34:53 crc kubenswrapper[4754]: E0218 19:34:53.211687 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7287z" podUID="94e4720b-a822-4e0b-adb2-3dbcab23d98c" Feb 18 19:34:56 crc kubenswrapper[4754]: E0218 19:34:56.212200 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hmjgh" podUID="98ea0633-bf9d-410b-bfa5-e71f322755f3" Feb 18 
19:34:56 crc kubenswrapper[4754]: I0218 19:34:56.927349 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-s85zg" Feb 18 19:34:58 crc kubenswrapper[4754]: I0218 19:34:58.982265 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-bbk55" Feb 18 19:34:59 crc kubenswrapper[4754]: I0218 19:34:59.315641 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvg25q" Feb 18 19:35:05 crc kubenswrapper[4754]: I0218 19:35:05.520451 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q92wm" event={"ID":"af3235d8-efd9-4edb-bd48-5f7ec1e41524","Type":"ContainerStarted","Data":"2b6769a6db7c661a3656ee2dfe0c00c1ac7cefcdb4c2d0619872b49944ff9311"} Feb 18 19:35:05 crc kubenswrapper[4754]: I0218 19:35:05.521254 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q92wm" Feb 18 19:35:05 crc kubenswrapper[4754]: I0218 19:35:05.539780 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q92wm" podStartSLOduration=3.36590045 podStartE2EDuration="58.539757376s" podCreationTimestamp="2026-02-18 19:34:07 +0000 UTC" firstStartedPulling="2026-02-18 19:34:09.607261434 +0000 UTC m=+952.057674230" lastFinishedPulling="2026-02-18 19:35:04.78111836 +0000 UTC m=+1007.231531156" observedRunningTime="2026-02-18 19:35:05.538314222 +0000 UTC m=+1007.988727028" watchObservedRunningTime="2026-02-18 19:35:05.539757376 +0000 UTC m=+1007.990170172" Feb 18 19:35:06 crc kubenswrapper[4754]: I0218 19:35:06.529036 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/watcher-operator-controller-manager-747b8fb99f-kxcrx" event={"ID":"70e6e7fe-dfdf-4bcf-815a-7e9c79e77965","Type":"ContainerStarted","Data":"93ef4a85ae0cc3310c88e836f9a71de8f3540b16d6abd9573a3df319e455008a"} Feb 18 19:35:06 crc kubenswrapper[4754]: I0218 19:35:06.529355 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-747b8fb99f-kxcrx" Feb 18 19:35:06 crc kubenswrapper[4754]: I0218 19:35:06.551848 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-747b8fb99f-kxcrx" podStartSLOduration=2.84279671 podStartE2EDuration="59.551818141s" podCreationTimestamp="2026-02-18 19:34:07 +0000 UTC" firstStartedPulling="2026-02-18 19:34:09.560772317 +0000 UTC m=+952.011185113" lastFinishedPulling="2026-02-18 19:35:06.269793748 +0000 UTC m=+1008.720206544" observedRunningTime="2026-02-18 19:35:06.54298024 +0000 UTC m=+1008.993393036" watchObservedRunningTime="2026-02-18 19:35:06.551818141 +0000 UTC m=+1009.002230937" Feb 18 19:35:08 crc kubenswrapper[4754]: I0218 19:35:08.546611 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7287z" event={"ID":"94e4720b-a822-4e0b-adb2-3dbcab23d98c","Type":"ContainerStarted","Data":"fe1b5f6cd05743b73af04369dd844c3cb20d2ef6e4b5210814da4857cb576410"} Feb 18 19:35:08 crc kubenswrapper[4754]: I0218 19:35:08.547124 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7287z" Feb 18 19:35:08 crc kubenswrapper[4754]: I0218 19:35:08.548526 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-nlwpx" event={"ID":"460668cc-09d6-4ea3-a100-5fe7cd58e9f5","Type":"ContainerStarted","Data":"3b65bd02d0630f4ab55086e3ca71341c8eb464d60bf23bea98695c39d4d5e8ec"} 
Feb 18 19:35:08 crc kubenswrapper[4754]: I0218 19:35:08.548731 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-nlwpx" Feb 18 19:35:08 crc kubenswrapper[4754]: I0218 19:35:08.569944 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7287z" podStartSLOduration=4.544163684 podStartE2EDuration="1m2.569917446s" podCreationTimestamp="2026-02-18 19:34:06 +0000 UTC" firstStartedPulling="2026-02-18 19:34:09.587245925 +0000 UTC m=+952.037658721" lastFinishedPulling="2026-02-18 19:35:07.612999677 +0000 UTC m=+1010.063412483" observedRunningTime="2026-02-18 19:35:08.56839612 +0000 UTC m=+1011.018808956" watchObservedRunningTime="2026-02-18 19:35:08.569917446 +0000 UTC m=+1011.020330242" Feb 18 19:35:08 crc kubenswrapper[4754]: I0218 19:35:08.589410 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-nlwpx" podStartSLOduration=3.527734299 podStartE2EDuration="1m1.589388201s" podCreationTimestamp="2026-02-18 19:34:07 +0000 UTC" firstStartedPulling="2026-02-18 19:34:09.614317211 +0000 UTC m=+952.064730007" lastFinishedPulling="2026-02-18 19:35:07.675971113 +0000 UTC m=+1010.126383909" observedRunningTime="2026-02-18 19:35:08.586211785 +0000 UTC m=+1011.036624581" watchObservedRunningTime="2026-02-18 19:35:08.589388201 +0000 UTC m=+1011.039800997" Feb 18 19:35:11 crc kubenswrapper[4754]: I0218 19:35:11.575269 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hmjgh" event={"ID":"98ea0633-bf9d-410b-bfa5-e71f322755f3","Type":"ContainerStarted","Data":"da94b388651e153222c2c699207f425a6c0a85200f9b5bc30dc603bd01a3c7ff"} Feb 18 19:35:11 crc kubenswrapper[4754]: I0218 19:35:11.603461 4754 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hmjgh" podStartSLOduration=3.232353803 podStartE2EDuration="1m4.603424169s" podCreationTimestamp="2026-02-18 19:34:07 +0000 UTC" firstStartedPulling="2026-02-18 19:34:09.593307083 +0000 UTC m=+952.043719879" lastFinishedPulling="2026-02-18 19:35:10.964377439 +0000 UTC m=+1013.414790245" observedRunningTime="2026-02-18 19:35:11.596882959 +0000 UTC m=+1014.047295755" watchObservedRunningTime="2026-02-18 19:35:11.603424169 +0000 UTC m=+1014.053837015" Feb 18 19:35:17 crc kubenswrapper[4754]: I0218 19:35:17.411477 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-7287z" Feb 18 19:35:17 crc kubenswrapper[4754]: I0218 19:35:17.726502 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q92wm" Feb 18 19:35:17 crc kubenswrapper[4754]: I0218 19:35:17.974465 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-nlwpx" Feb 18 19:35:18 crc kubenswrapper[4754]: I0218 19:35:18.026026 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-747b8fb99f-kxcrx" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.580469 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c7dzm"] Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.582841 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c7dzm" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.586247 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.586272 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-kg9fz" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.586564 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.589524 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.589834 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c7dzm"] Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.647094 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l8295"] Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.649326 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-l8295" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.653253 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.668725 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l8295"] Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.764264 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np8gw\" (UniqueName: \"kubernetes.io/projected/f66fd58b-e940-4e0f-bf09-ceceaba2315f-kube-api-access-np8gw\") pod \"dnsmasq-dns-675f4bcbfc-c7dzm\" (UID: \"f66fd58b-e940-4e0f-bf09-ceceaba2315f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c7dzm" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.764371 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jszvd\" (UniqueName: \"kubernetes.io/projected/b6756083-12c5-4e8a-a53f-bb5191a537ff-kube-api-access-jszvd\") pod \"dnsmasq-dns-78dd6ddcc-l8295\" (UID: \"b6756083-12c5-4e8a-a53f-bb5191a537ff\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l8295" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.764405 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f66fd58b-e940-4e0f-bf09-ceceaba2315f-config\") pod \"dnsmasq-dns-675f4bcbfc-c7dzm\" (UID: \"f66fd58b-e940-4e0f-bf09-ceceaba2315f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c7dzm" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.765090 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6756083-12c5-4e8a-a53f-bb5191a537ff-config\") pod \"dnsmasq-dns-78dd6ddcc-l8295\" (UID: \"b6756083-12c5-4e8a-a53f-bb5191a537ff\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-l8295" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.765263 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6756083-12c5-4e8a-a53f-bb5191a537ff-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-l8295\" (UID: \"b6756083-12c5-4e8a-a53f-bb5191a537ff\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l8295" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.867107 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np8gw\" (UniqueName: \"kubernetes.io/projected/f66fd58b-e940-4e0f-bf09-ceceaba2315f-kube-api-access-np8gw\") pod \"dnsmasq-dns-675f4bcbfc-c7dzm\" (UID: \"f66fd58b-e940-4e0f-bf09-ceceaba2315f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c7dzm" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.867243 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jszvd\" (UniqueName: \"kubernetes.io/projected/b6756083-12c5-4e8a-a53f-bb5191a537ff-kube-api-access-jszvd\") pod \"dnsmasq-dns-78dd6ddcc-l8295\" (UID: \"b6756083-12c5-4e8a-a53f-bb5191a537ff\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l8295" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.867301 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f66fd58b-e940-4e0f-bf09-ceceaba2315f-config\") pod \"dnsmasq-dns-675f4bcbfc-c7dzm\" (UID: \"f66fd58b-e940-4e0f-bf09-ceceaba2315f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c7dzm" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.867339 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6756083-12c5-4e8a-a53f-bb5191a537ff-config\") pod \"dnsmasq-dns-78dd6ddcc-l8295\" (UID: \"b6756083-12c5-4e8a-a53f-bb5191a537ff\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l8295" Feb 18 19:35:36 
crc kubenswrapper[4754]: I0218 19:35:36.867389 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6756083-12c5-4e8a-a53f-bb5191a537ff-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-l8295\" (UID: \"b6756083-12c5-4e8a-a53f-bb5191a537ff\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l8295" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.868418 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f66fd58b-e940-4e0f-bf09-ceceaba2315f-config\") pod \"dnsmasq-dns-675f4bcbfc-c7dzm\" (UID: \"f66fd58b-e940-4e0f-bf09-ceceaba2315f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c7dzm" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.868546 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6756083-12c5-4e8a-a53f-bb5191a537ff-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-l8295\" (UID: \"b6756083-12c5-4e8a-a53f-bb5191a537ff\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l8295" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.868605 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6756083-12c5-4e8a-a53f-bb5191a537ff-config\") pod \"dnsmasq-dns-78dd6ddcc-l8295\" (UID: \"b6756083-12c5-4e8a-a53f-bb5191a537ff\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l8295" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.890890 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np8gw\" (UniqueName: \"kubernetes.io/projected/f66fd58b-e940-4e0f-bf09-ceceaba2315f-kube-api-access-np8gw\") pod \"dnsmasq-dns-675f4bcbfc-c7dzm\" (UID: \"f66fd58b-e940-4e0f-bf09-ceceaba2315f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-c7dzm" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.891345 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jszvd\" (UniqueName: \"kubernetes.io/projected/b6756083-12c5-4e8a-a53f-bb5191a537ff-kube-api-access-jszvd\") pod \"dnsmasq-dns-78dd6ddcc-l8295\" (UID: \"b6756083-12c5-4e8a-a53f-bb5191a537ff\") " pod="openstack/dnsmasq-dns-78dd6ddcc-l8295" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.900176 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c7dzm" Feb 18 19:35:36 crc kubenswrapper[4754]: I0218 19:35:36.977607 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-l8295" Feb 18 19:35:38 crc kubenswrapper[4754]: I0218 19:35:38.059586 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c7dzm"] Feb 18 19:35:38 crc kubenswrapper[4754]: I0218 19:35:38.104832 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-c7dzm" event={"ID":"f66fd58b-e940-4e0f-bf09-ceceaba2315f","Type":"ContainerStarted","Data":"4507a9f08a82259702fbd4102eb8f6f75121406226507a4c019bed708977f726"} Feb 18 19:35:38 crc kubenswrapper[4754]: I0218 19:35:38.747364 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l8295"] Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.008814 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c7dzm"] Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.060063 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bsnzd"] Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.061713 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bsnzd" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.074303 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bsnzd"] Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.125076 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-l8295" event={"ID":"b6756083-12c5-4e8a-a53f-bb5191a537ff","Type":"ContainerStarted","Data":"f0fa33d7c42cb8f97cac96dbff6941307665fa189bf2378e1f99bdcff16da83b"} Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.200793 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9549\" (UniqueName: \"kubernetes.io/projected/d3a2e916-fb78-41e4-966b-c0c613144506-kube-api-access-q9549\") pod \"dnsmasq-dns-666b6646f7-bsnzd\" (UID: \"d3a2e916-fb78-41e4-966b-c0c613144506\") " pod="openstack/dnsmasq-dns-666b6646f7-bsnzd" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.200951 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3a2e916-fb78-41e4-966b-c0c613144506-config\") pod \"dnsmasq-dns-666b6646f7-bsnzd\" (UID: \"d3a2e916-fb78-41e4-966b-c0c613144506\") " pod="openstack/dnsmasq-dns-666b6646f7-bsnzd" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.201022 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3a2e916-fb78-41e4-966b-c0c613144506-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bsnzd\" (UID: \"d3a2e916-fb78-41e4-966b-c0c613144506\") " pod="openstack/dnsmasq-dns-666b6646f7-bsnzd" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.302563 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9549\" (UniqueName: 
\"kubernetes.io/projected/d3a2e916-fb78-41e4-966b-c0c613144506-kube-api-access-q9549\") pod \"dnsmasq-dns-666b6646f7-bsnzd\" (UID: \"d3a2e916-fb78-41e4-966b-c0c613144506\") " pod="openstack/dnsmasq-dns-666b6646f7-bsnzd" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.302680 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3a2e916-fb78-41e4-966b-c0c613144506-config\") pod \"dnsmasq-dns-666b6646f7-bsnzd\" (UID: \"d3a2e916-fb78-41e4-966b-c0c613144506\") " pod="openstack/dnsmasq-dns-666b6646f7-bsnzd" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.302756 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3a2e916-fb78-41e4-966b-c0c613144506-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bsnzd\" (UID: \"d3a2e916-fb78-41e4-966b-c0c613144506\") " pod="openstack/dnsmasq-dns-666b6646f7-bsnzd" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.303724 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3a2e916-fb78-41e4-966b-c0c613144506-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bsnzd\" (UID: \"d3a2e916-fb78-41e4-966b-c0c613144506\") " pod="openstack/dnsmasq-dns-666b6646f7-bsnzd" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.305177 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3a2e916-fb78-41e4-966b-c0c613144506-config\") pod \"dnsmasq-dns-666b6646f7-bsnzd\" (UID: \"d3a2e916-fb78-41e4-966b-c0c613144506\") " pod="openstack/dnsmasq-dns-666b6646f7-bsnzd" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.324739 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l8295"] Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.334545 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-q9549\" (UniqueName: \"kubernetes.io/projected/d3a2e916-fb78-41e4-966b-c0c613144506-kube-api-access-q9549\") pod \"dnsmasq-dns-666b6646f7-bsnzd\" (UID: \"d3a2e916-fb78-41e4-966b-c0c613144506\") " pod="openstack/dnsmasq-dns-666b6646f7-bsnzd" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.395971 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bsnzd" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.685586 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-54ppg"] Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.687434 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-54ppg" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.702371 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-54ppg"] Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.726759 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq8lr\" (UniqueName: \"kubernetes.io/projected/bd6d05cb-564c-44d7-83c8-d3487e363533-kube-api-access-jq8lr\") pod \"dnsmasq-dns-57d769cc4f-54ppg\" (UID: \"bd6d05cb-564c-44d7-83c8-d3487e363533\") " pod="openstack/dnsmasq-dns-57d769cc4f-54ppg" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.726828 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd6d05cb-564c-44d7-83c8-d3487e363533-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-54ppg\" (UID: \"bd6d05cb-564c-44d7-83c8-d3487e363533\") " pod="openstack/dnsmasq-dns-57d769cc4f-54ppg" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.726903 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bd6d05cb-564c-44d7-83c8-d3487e363533-config\") pod \"dnsmasq-dns-57d769cc4f-54ppg\" (UID: \"bd6d05cb-564c-44d7-83c8-d3487e363533\") " pod="openstack/dnsmasq-dns-57d769cc4f-54ppg" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.835478 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd6d05cb-564c-44d7-83c8-d3487e363533-config\") pod \"dnsmasq-dns-57d769cc4f-54ppg\" (UID: \"bd6d05cb-564c-44d7-83c8-d3487e363533\") " pod="openstack/dnsmasq-dns-57d769cc4f-54ppg" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.835581 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq8lr\" (UniqueName: \"kubernetes.io/projected/bd6d05cb-564c-44d7-83c8-d3487e363533-kube-api-access-jq8lr\") pod \"dnsmasq-dns-57d769cc4f-54ppg\" (UID: \"bd6d05cb-564c-44d7-83c8-d3487e363533\") " pod="openstack/dnsmasq-dns-57d769cc4f-54ppg" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.835604 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd6d05cb-564c-44d7-83c8-d3487e363533-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-54ppg\" (UID: \"bd6d05cb-564c-44d7-83c8-d3487e363533\") " pod="openstack/dnsmasq-dns-57d769cc4f-54ppg" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.836559 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd6d05cb-564c-44d7-83c8-d3487e363533-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-54ppg\" (UID: \"bd6d05cb-564c-44d7-83c8-d3487e363533\") " pod="openstack/dnsmasq-dns-57d769cc4f-54ppg" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.838421 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd6d05cb-564c-44d7-83c8-d3487e363533-config\") pod \"dnsmasq-dns-57d769cc4f-54ppg\" (UID: 
\"bd6d05cb-564c-44d7-83c8-d3487e363533\") " pod="openstack/dnsmasq-dns-57d769cc4f-54ppg" Feb 18 19:35:39 crc kubenswrapper[4754]: I0218 19:35:39.859372 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq8lr\" (UniqueName: \"kubernetes.io/projected/bd6d05cb-564c-44d7-83c8-d3487e363533-kube-api-access-jq8lr\") pod \"dnsmasq-dns-57d769cc4f-54ppg\" (UID: \"bd6d05cb-564c-44d7-83c8-d3487e363533\") " pod="openstack/dnsmasq-dns-57d769cc4f-54ppg" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.038038 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-54ppg" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.110091 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bsnzd"] Feb 18 19:35:40 crc kubenswrapper[4754]: W0218 19:35:40.118948 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3a2e916_fb78_41e4_966b_c0c613144506.slice/crio-90c7a185ddfb44dcde401471e03a5cfe071f7e41cd597ee765b3e881a61e0c6c WatchSource:0}: Error finding container 90c7a185ddfb44dcde401471e03a5cfe071f7e41cd597ee765b3e881a61e0c6c: Status 404 returned error can't find the container with id 90c7a185ddfb44dcde401471e03a5cfe071f7e41cd597ee765b3e881a61e0c6c Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.134384 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bsnzd" event={"ID":"d3a2e916-fb78-41e4-966b-c0c613144506","Type":"ContainerStarted","Data":"90c7a185ddfb44dcde401471e03a5cfe071f7e41cd597ee765b3e881a61e0c6c"} Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.190413 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.192318 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.194504 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.196805 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.197218 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.197253 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.197810 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.197834 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-86n4h" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.197545 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.226281 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.343951 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.344023 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.344046 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.344075 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.344234 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-config-data\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.344359 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.344387 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqgnj\" (UniqueName: 
\"kubernetes.io/projected/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-kube-api-access-kqgnj\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.344460 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.344483 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.344512 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.344599 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.446478 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-plugins\") pod 
\"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.446549 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.446576 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.446613 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.446651 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-config-data\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.446685 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.446705 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqgnj\" (UniqueName: \"kubernetes.io/projected/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-kube-api-access-kqgnj\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.446734 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.446757 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.446780 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.446806 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.448207 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.448548 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.449285 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-config-data\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.449518 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.449743 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.453838 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " 
pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.456293 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.457062 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.458347 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.463073 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.470847 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqgnj\" (UniqueName: \"kubernetes.io/projected/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-kube-api-access-kqgnj\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.481728 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.589856 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.738481 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.746838 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.759518 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.759819 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.760260 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-8zt5q" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.760594 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.760671 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.760710 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.760810 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.778898 4754 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.795681 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d87128e7-abb0-4dd7-9b9f-04a4393c2313-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.795761 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.795929 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.796026 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d87128e7-abb0-4dd7-9b9f-04a4393c2313-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.796077 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6hfj\" (UniqueName: \"kubernetes.io/projected/d87128e7-abb0-4dd7-9b9f-04a4393c2313-kube-api-access-l6hfj\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.796206 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.796288 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.796326 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d87128e7-abb0-4dd7-9b9f-04a4393c2313-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.796380 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.796425 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d87128e7-abb0-4dd7-9b9f-04a4393c2313-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.796506 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d87128e7-abb0-4dd7-9b9f-04a4393c2313-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.901737 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d87128e7-abb0-4dd7-9b9f-04a4393c2313-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.901891 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d87128e7-abb0-4dd7-9b9f-04a4393c2313-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.901949 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.901977 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc 
kubenswrapper[4754]: I0218 19:35:40.902017 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d87128e7-abb0-4dd7-9b9f-04a4393c2313-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.902050 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6hfj\" (UniqueName: \"kubernetes.io/projected/d87128e7-abb0-4dd7-9b9f-04a4393c2313-kube-api-access-l6hfj\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.902106 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.902178 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.902211 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d87128e7-abb0-4dd7-9b9f-04a4393c2313-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.902248 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.902282 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d87128e7-abb0-4dd7-9b9f-04a4393c2313-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.903485 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d87128e7-abb0-4dd7-9b9f-04a4393c2313-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.904650 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d87128e7-abb0-4dd7-9b9f-04a4393c2313-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.904695 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.905599 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/d87128e7-abb0-4dd7-9b9f-04a4393c2313-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.906181 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.907988 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.914706 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.915291 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.929813 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d87128e7-abb0-4dd7-9b9f-04a4393c2313-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.930161 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6hfj\" (UniqueName: \"kubernetes.io/projected/d87128e7-abb0-4dd7-9b9f-04a4393c2313-kube-api-access-l6hfj\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.933579 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d87128e7-abb0-4dd7-9b9f-04a4393c2313-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.960461 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:40 crc kubenswrapper[4754]: I0218 19:35:40.968509 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-54ppg"] Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.148227 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-54ppg" event={"ID":"bd6d05cb-564c-44d7-83c8-d3487e363533","Type":"ContainerStarted","Data":"3f9d31c0bef58a76c713ea587e262967e5c283e26eca6f6ec1c4af853c24de79"} Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.161864 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.480564 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.612246 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.884955 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.886927 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.897185 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.898803 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.899538 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-7bggd" Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.899669 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.901912 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.901924 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.939433 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b735ae88-5b0c-47df-ac6a-a9dfac565b59-kolla-config\") pod \"openstack-galera-0\" 
(UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.939491 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b735ae88-5b0c-47df-ac6a-a9dfac565b59-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.939574 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b735ae88-5b0c-47df-ac6a-a9dfac565b59-config-data-default\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.939617 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.939660 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b735ae88-5b0c-47df-ac6a-a9dfac565b59-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.939687 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b735ae88-5b0c-47df-ac6a-a9dfac565b59-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " 
pod="openstack/openstack-galera-0" Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.939756 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b735ae88-5b0c-47df-ac6a-a9dfac565b59-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:41 crc kubenswrapper[4754]: I0218 19:35:41.939782 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxnc4\" (UniqueName: \"kubernetes.io/projected/b735ae88-5b0c-47df-ac6a-a9dfac565b59-kube-api-access-nxnc4\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.042006 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b735ae88-5b0c-47df-ac6a-a9dfac565b59-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.042079 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b735ae88-5b0c-47df-ac6a-a9dfac565b59-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.042682 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b735ae88-5b0c-47df-ac6a-a9dfac565b59-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:42 crc 
kubenswrapper[4754]: I0218 19:35:42.042757 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxnc4\" (UniqueName: \"kubernetes.io/projected/b735ae88-5b0c-47df-ac6a-a9dfac565b59-kube-api-access-nxnc4\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.042849 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b735ae88-5b0c-47df-ac6a-a9dfac565b59-kolla-config\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.042896 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b735ae88-5b0c-47df-ac6a-a9dfac565b59-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.043095 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b735ae88-5b0c-47df-ac6a-a9dfac565b59-config-data-default\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.043253 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.043667 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.048071 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b735ae88-5b0c-47df-ac6a-a9dfac565b59-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.048735 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b735ae88-5b0c-47df-ac6a-a9dfac565b59-kolla-config\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.058203 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b735ae88-5b0c-47df-ac6a-a9dfac565b59-config-data-default\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.062314 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b735ae88-5b0c-47df-ac6a-a9dfac565b59-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.072993 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b735ae88-5b0c-47df-ac6a-a9dfac565b59-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.074737 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b735ae88-5b0c-47df-ac6a-a9dfac565b59-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.097352 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.118609 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxnc4\" (UniqueName: \"kubernetes.io/projected/b735ae88-5b0c-47df-ac6a-a9dfac565b59-kube-api-access-nxnc4\") pod \"openstack-galera-0\" (UID: \"b735ae88-5b0c-47df-ac6a-a9dfac565b59\") " pod="openstack/openstack-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.207221 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d87128e7-abb0-4dd7-9b9f-04a4393c2313","Type":"ContainerStarted","Data":"33417fe67ee8035ef94b4b38be48aedc00fe82866b924d9be15b477aa9b93d4b"} Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.225672 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.244079 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c266c06-8bfc-47ba-bab9-6ef36d6294e5","Type":"ContainerStarted","Data":"9d3f2eb31ffc3cb9f4e5addcd4bf23642796de12345e1f8f5fcc8823da201875"} Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.812314 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.815003 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.822788 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.823440 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.822791 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-dds5z" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.828381 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.839929 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.986494 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/27ef0b17-0896-4405-97ae-c4145e9d388c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:42 crc 
kubenswrapper[4754]: I0218 19:35:42.986662 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/27ef0b17-0896-4405-97ae-c4145e9d388c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.986753 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.986908 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/27ef0b17-0896-4405-97ae-c4145e9d388c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.987100 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27ef0b17-0896-4405-97ae-c4145e9d388c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.987167 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmtqg\" (UniqueName: \"kubernetes.io/projected/27ef0b17-0896-4405-97ae-c4145e9d388c-kube-api-access-xmtqg\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:42 
crc kubenswrapper[4754]: I0218 19:35:42.988696 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/27ef0b17-0896-4405-97ae-c4145e9d388c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:42 crc kubenswrapper[4754]: I0218 19:35:42.988775 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ef0b17-0896-4405-97ae-c4145e9d388c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.018172 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.092217 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/27ef0b17-0896-4405-97ae-c4145e9d388c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.092297 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ef0b17-0896-4405-97ae-c4145e9d388c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.092356 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/27ef0b17-0896-4405-97ae-c4145e9d388c-config-data-default\") pod 
\"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.092389 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/27ef0b17-0896-4405-97ae-c4145e9d388c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.092421 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.092467 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/27ef0b17-0896-4405-97ae-c4145e9d388c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.092502 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27ef0b17-0896-4405-97ae-c4145e9d388c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.092524 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmtqg\" (UniqueName: \"kubernetes.io/projected/27ef0b17-0896-4405-97ae-c4145e9d388c-kube-api-access-xmtqg\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " 
pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.095230 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/27ef0b17-0896-4405-97ae-c4145e9d388c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.096526 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/27ef0b17-0896-4405-97ae-c4145e9d388c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.096759 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/27ef0b17-0896-4405-97ae-c4145e9d388c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.097040 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.100264 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27ef0b17-0896-4405-97ae-c4145e9d388c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 
19:35:43.111883 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27ef0b17-0896-4405-97ae-c4145e9d388c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.124970 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmtqg\" (UniqueName: \"kubernetes.io/projected/27ef0b17-0896-4405-97ae-c4145e9d388c-kube-api-access-xmtqg\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.129833 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/27ef0b17-0896-4405-97ae-c4145e9d388c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.144827 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"27ef0b17-0896-4405-97ae-c4145e9d388c\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.172114 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.174274 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.186113 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.186665 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.186822 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-zlsj8" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.198933 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.265640 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b735ae88-5b0c-47df-ac6a-a9dfac565b59","Type":"ContainerStarted","Data":"72b9d32f2bf109f74532d5b6afb11d3f9f01987db8a2bce4dff08602f9b98d72"} Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.296560 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5jsh\" (UniqueName: \"kubernetes.io/projected/830a61c8-fe23-4d5d-b661-bd21851cbdea-kube-api-access-h5jsh\") pod \"memcached-0\" (UID: \"830a61c8-fe23-4d5d-b661-bd21851cbdea\") " pod="openstack/memcached-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.296653 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/830a61c8-fe23-4d5d-b661-bd21851cbdea-kolla-config\") pod \"memcached-0\" (UID: \"830a61c8-fe23-4d5d-b661-bd21851cbdea\") " pod="openstack/memcached-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.296705 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/830a61c8-fe23-4d5d-b661-bd21851cbdea-memcached-tls-certs\") pod \"memcached-0\" (UID: \"830a61c8-fe23-4d5d-b661-bd21851cbdea\") " pod="openstack/memcached-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.296734 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/830a61c8-fe23-4d5d-b661-bd21851cbdea-combined-ca-bundle\") pod \"memcached-0\" (UID: \"830a61c8-fe23-4d5d-b661-bd21851cbdea\") " pod="openstack/memcached-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.296771 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/830a61c8-fe23-4d5d-b661-bd21851cbdea-config-data\") pod \"memcached-0\" (UID: \"830a61c8-fe23-4d5d-b661-bd21851cbdea\") " pod="openstack/memcached-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.398995 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5jsh\" (UniqueName: \"kubernetes.io/projected/830a61c8-fe23-4d5d-b661-bd21851cbdea-kube-api-access-h5jsh\") pod \"memcached-0\" (UID: \"830a61c8-fe23-4d5d-b661-bd21851cbdea\") " pod="openstack/memcached-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.399071 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/830a61c8-fe23-4d5d-b661-bd21851cbdea-kolla-config\") pod \"memcached-0\" (UID: \"830a61c8-fe23-4d5d-b661-bd21851cbdea\") " pod="openstack/memcached-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.399120 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/830a61c8-fe23-4d5d-b661-bd21851cbdea-memcached-tls-certs\") pod \"memcached-0\" (UID: \"830a61c8-fe23-4d5d-b661-bd21851cbdea\") " 
pod="openstack/memcached-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.399183 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/830a61c8-fe23-4d5d-b661-bd21851cbdea-combined-ca-bundle\") pod \"memcached-0\" (UID: \"830a61c8-fe23-4d5d-b661-bd21851cbdea\") " pod="openstack/memcached-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.399231 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/830a61c8-fe23-4d5d-b661-bd21851cbdea-config-data\") pod \"memcached-0\" (UID: \"830a61c8-fe23-4d5d-b661-bd21851cbdea\") " pod="openstack/memcached-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.400855 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/830a61c8-fe23-4d5d-b661-bd21851cbdea-kolla-config\") pod \"memcached-0\" (UID: \"830a61c8-fe23-4d5d-b661-bd21851cbdea\") " pod="openstack/memcached-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.401934 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/830a61c8-fe23-4d5d-b661-bd21851cbdea-config-data\") pod \"memcached-0\" (UID: \"830a61c8-fe23-4d5d-b661-bd21851cbdea\") " pod="openstack/memcached-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.406777 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/830a61c8-fe23-4d5d-b661-bd21851cbdea-combined-ca-bundle\") pod \"memcached-0\" (UID: \"830a61c8-fe23-4d5d-b661-bd21851cbdea\") " pod="openstack/memcached-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.420473 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5jsh\" (UniqueName: 
\"kubernetes.io/projected/830a61c8-fe23-4d5d-b661-bd21851cbdea-kube-api-access-h5jsh\") pod \"memcached-0\" (UID: \"830a61c8-fe23-4d5d-b661-bd21851cbdea\") " pod="openstack/memcached-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.434808 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/830a61c8-fe23-4d5d-b661-bd21851cbdea-memcached-tls-certs\") pod \"memcached-0\" (UID: \"830a61c8-fe23-4d5d-b661-bd21851cbdea\") " pod="openstack/memcached-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.441503 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 19:35:43 crc kubenswrapper[4754]: I0218 19:35:43.567590 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 18 19:35:44 crc kubenswrapper[4754]: I0218 19:35:44.135542 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 19:35:44 crc kubenswrapper[4754]: I0218 19:35:44.259543 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 19:35:44 crc kubenswrapper[4754]: I0218 19:35:44.288588 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"27ef0b17-0896-4405-97ae-c4145e9d388c","Type":"ContainerStarted","Data":"45f32b8bb362baba5728cc0fae05903c70e1e475fbe4344880f5a660aa4a9d22"} Feb 18 19:35:45 crc kubenswrapper[4754]: I0218 19:35:45.335305 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:35:45 crc kubenswrapper[4754]: I0218 19:35:45.337221 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:35:45 crc kubenswrapper[4754]: I0218 19:35:45.344774 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-9k587" Feb 18 19:35:45 crc kubenswrapper[4754]: I0218 19:35:45.359733 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"830a61c8-fe23-4d5d-b661-bd21851cbdea","Type":"ContainerStarted","Data":"456a7c034669954ec636a890ae7d07ab61b1f6ad354135fe0e86d20d29ab374b"} Feb 18 19:35:45 crc kubenswrapper[4754]: I0218 19:35:45.361688 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:35:45 crc kubenswrapper[4754]: I0218 19:35:45.375979 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnxs6\" (UniqueName: \"kubernetes.io/projected/f70d6e04-a01e-4213-83b3-b986177730f1-kube-api-access-qnxs6\") pod \"kube-state-metrics-0\" (UID: \"f70d6e04-a01e-4213-83b3-b986177730f1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:35:45 crc kubenswrapper[4754]: I0218 19:35:45.482873 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnxs6\" (UniqueName: \"kubernetes.io/projected/f70d6e04-a01e-4213-83b3-b986177730f1-kube-api-access-qnxs6\") pod \"kube-state-metrics-0\" (UID: \"f70d6e04-a01e-4213-83b3-b986177730f1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:35:45 crc kubenswrapper[4754]: I0218 19:35:45.535678 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnxs6\" (UniqueName: \"kubernetes.io/projected/f70d6e04-a01e-4213-83b3-b986177730f1-kube-api-access-qnxs6\") pod \"kube-state-metrics-0\" (UID: \"f70d6e04-a01e-4213-83b3-b986177730f1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:35:45 crc kubenswrapper[4754]: I0218 19:35:45.671364 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.482341 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:35:46 crc kubenswrapper[4754]: W0218 19:35:46.544814 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf70d6e04_a01e_4213_83b3_b986177730f1.slice/crio-5d2ada013616ea41ac629810bf801cf1c690e1cb2d02a80e92eba673abad8cdc WatchSource:0}: Error finding container 5d2ada013616ea41ac629810bf801cf1c690e1cb2d02a80e92eba673abad8cdc: Status 404 returned error can't find the container with id 5d2ada013616ea41ac629810bf801cf1c690e1cb2d02a80e92eba673abad8cdc Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.684887 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.688418 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.691077 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.691780 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.692013 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.692371 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.692731 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-jgskl" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.692826 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.692846 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.701173 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.726538 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.823902 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b36185b7-72d3-4f98-9928-e1c4c27594fa-config-out\") pod 
\"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.824438 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b36185b7-72d3-4f98-9928-e1c4c27594fa-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.824557 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b36185b7-72d3-4f98-9928-e1c4c27594fa-config\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.825003 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b36185b7-72d3-4f98-9928-e1c4c27594fa-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.825062 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b36185b7-72d3-4f98-9928-e1c4c27594fa-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.825103 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b36185b7-72d3-4f98-9928-e1c4c27594fa-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.825205 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b36185b7-72d3-4f98-9928-e1c4c27594fa-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.825368 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmwkg\" (UniqueName: \"kubernetes.io/projected/b36185b7-72d3-4f98-9928-e1c4c27594fa-kube-api-access-qmwkg\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.825455 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b36185b7-72d3-4f98-9928-e1c4c27594fa-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.825592 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " 
pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.933016 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmwkg\" (UniqueName: \"kubernetes.io/projected/b36185b7-72d3-4f98-9928-e1c4c27594fa-kube-api-access-qmwkg\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.933072 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b36185b7-72d3-4f98-9928-e1c4c27594fa-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.933119 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.933179 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b36185b7-72d3-4f98-9928-e1c4c27594fa-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.933213 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b36185b7-72d3-4f98-9928-e1c4c27594fa-web-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.933239 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b36185b7-72d3-4f98-9928-e1c4c27594fa-config\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.933293 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b36185b7-72d3-4f98-9928-e1c4c27594fa-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.933322 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b36185b7-72d3-4f98-9928-e1c4c27594fa-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.933342 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b36185b7-72d3-4f98-9928-e1c4c27594fa-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.933369 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b36185b7-72d3-4f98-9928-e1c4c27594fa-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.934179 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b36185b7-72d3-4f98-9928-e1c4c27594fa-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.934480 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b36185b7-72d3-4f98-9928-e1c4c27594fa-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.936996 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b36185b7-72d3-4f98-9928-e1c4c27594fa-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.943806 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b36185b7-72d3-4f98-9928-e1c4c27594fa-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.944805 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/b36185b7-72d3-4f98-9928-e1c4c27594fa-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.944945 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b36185b7-72d3-4f98-9928-e1c4c27594fa-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.945114 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b36185b7-72d3-4f98-9928-e1c4c27594fa-config\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.947495 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b36185b7-72d3-4f98-9928-e1c4c27594fa-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.970265 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmwkg\" (UniqueName: \"kubernetes.io/projected/b36185b7-72d3-4f98-9928-e1c4c27594fa-kube-api-access-qmwkg\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.978481 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 19:35:46 crc kubenswrapper[4754]: I0218 19:35:46.978583 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/81ceb4b204ea14622a6725e3273bbd9693392e71b58bdedf5b3f6ad4f339a7ba/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:47 crc kubenswrapper[4754]: I0218 19:35:47.071815 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\") pod \"prometheus-metric-storage-0\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:47 crc kubenswrapper[4754]: I0218 19:35:47.315526 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:35:47 crc kubenswrapper[4754]: I0218 19:35:47.466450 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f70d6e04-a01e-4213-83b3-b986177730f1","Type":"ContainerStarted","Data":"5d2ada013616ea41ac629810bf801cf1c690e1cb2d02a80e92eba673abad8cdc"} Feb 18 19:35:48 crc kubenswrapper[4754]: I0218 19:35:48.019274 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:35:48 crc kubenswrapper[4754]: W0218 19:35:48.295435 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb36185b7_72d3_4f98_9928_e1c4c27594fa.slice/crio-7971fd3605bd5c96ff4596ce9da6b9d90562338378cc199193e22388f3bcd13d WatchSource:0}: Error finding container 7971fd3605bd5c96ff4596ce9da6b9d90562338378cc199193e22388f3bcd13d: Status 404 returned error can't find the container with id 7971fd3605bd5c96ff4596ce9da6b9d90562338378cc199193e22388f3bcd13d Feb 18 19:35:48 crc kubenswrapper[4754]: I0218 19:35:48.514219 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b36185b7-72d3-4f98-9928-e1c4c27594fa","Type":"ContainerStarted","Data":"7971fd3605bd5c96ff4596ce9da6b9d90562338378cc199193e22388f3bcd13d"} Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.288032 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.290554 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.296857 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.296966 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.297370 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-5cb92" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.297675 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.299316 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.301373 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.408900 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/693eb041-c8e5-4196-80b5-c3aff2bdd232-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.409018 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/693eb041-c8e5-4196-80b5-c3aff2bdd232-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.409238 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/693eb041-c8e5-4196-80b5-c3aff2bdd232-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.409359 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr496\" (UniqueName: \"kubernetes.io/projected/693eb041-c8e5-4196-80b5-c3aff2bdd232-kube-api-access-rr496\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.409427 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/693eb041-c8e5-4196-80b5-c3aff2bdd232-config\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.409674 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/693eb041-c8e5-4196-80b5-c3aff2bdd232-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.409833 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.409955 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/693eb041-c8e5-4196-80b5-c3aff2bdd232-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.524430 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/693eb041-c8e5-4196-80b5-c3aff2bdd232-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.524738 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr496\" (UniqueName: \"kubernetes.io/projected/693eb041-c8e5-4196-80b5-c3aff2bdd232-kube-api-access-rr496\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.524816 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/693eb041-c8e5-4196-80b5-c3aff2bdd232-config\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.524876 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/693eb041-c8e5-4196-80b5-c3aff2bdd232-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.524950 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 
18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.525002 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/693eb041-c8e5-4196-80b5-c3aff2bdd232-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.525066 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/693eb041-c8e5-4196-80b5-c3aff2bdd232-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.525129 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/693eb041-c8e5-4196-80b5-c3aff2bdd232-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.525494 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/693eb041-c8e5-4196-80b5-c3aff2bdd232-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.526017 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.526927 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/693eb041-c8e5-4196-80b5-c3aff2bdd232-config\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.527014 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/693eb041-c8e5-4196-80b5-c3aff2bdd232-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.544738 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/693eb041-c8e5-4196-80b5-c3aff2bdd232-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.559003 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr496\" (UniqueName: \"kubernetes.io/projected/693eb041-c8e5-4196-80b5-c3aff2bdd232-kube-api-access-rr496\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.561826 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/693eb041-c8e5-4196-80b5-c3aff2bdd232-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.575116 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 
19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.577638 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/693eb041-c8e5-4196-80b5-c3aff2bdd232-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"693eb041-c8e5-4196-80b5-c3aff2bdd232\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.628478 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.776630 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9bcwf"] Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.778266 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.781491 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-gs5q2" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.785465 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.785727 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.798548 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-pqtp2"] Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.800875 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.887532 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9bcwf"] Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.919813 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pqtp2"] Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.934398 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/fbe55796-4440-4ec9-b69b-1f897aeb6f28-var-lib\") pod \"ovn-controller-ovs-pqtp2\" (UID: \"fbe55796-4440-4ec9-b69b-1f897aeb6f28\") " pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.934586 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8cdad93-899c-45df-aba0-c680a947f021-var-run-ovn\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.934629 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/fbe55796-4440-4ec9-b69b-1f897aeb6f28-etc-ovs\") pod \"ovn-controller-ovs-pqtp2\" (UID: \"fbe55796-4440-4ec9-b69b-1f897aeb6f28\") " pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.934655 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8cdad93-899c-45df-aba0-c680a947f021-scripts\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.934694 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6txw\" (UniqueName: \"kubernetes.io/projected/fbe55796-4440-4ec9-b69b-1f897aeb6f28-kube-api-access-q6txw\") pod \"ovn-controller-ovs-pqtp2\" (UID: \"fbe55796-4440-4ec9-b69b-1f897aeb6f28\") " pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.934743 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbe55796-4440-4ec9-b69b-1f897aeb6f28-scripts\") pod \"ovn-controller-ovs-pqtp2\" (UID: \"fbe55796-4440-4ec9-b69b-1f897aeb6f28\") " pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.934920 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9n4m\" (UniqueName: \"kubernetes.io/projected/c8cdad93-899c-45df-aba0-c680a947f021-kube-api-access-j9n4m\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.935033 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cdad93-899c-45df-aba0-c680a947f021-ovn-controller-tls-certs\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.935109 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cdad93-899c-45df-aba0-c680a947f021-combined-ca-bundle\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 
19:35:49.935291 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/fbe55796-4440-4ec9-b69b-1f897aeb6f28-var-log\") pod \"ovn-controller-ovs-pqtp2\" (UID: \"fbe55796-4440-4ec9-b69b-1f897aeb6f28\") " pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.935330 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c8cdad93-899c-45df-aba0-c680a947f021-var-log-ovn\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.935357 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbe55796-4440-4ec9-b69b-1f897aeb6f28-var-run\") pod \"ovn-controller-ovs-pqtp2\" (UID: \"fbe55796-4440-4ec9-b69b-1f897aeb6f28\") " pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:49 crc kubenswrapper[4754]: I0218 19:35:49.935392 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c8cdad93-899c-45df-aba0-c680a947f021-var-run\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.037698 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbe55796-4440-4ec9-b69b-1f897aeb6f28-var-run\") pod \"ovn-controller-ovs-pqtp2\" (UID: \"fbe55796-4440-4ec9-b69b-1f897aeb6f28\") " pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.037915 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-run\" (UniqueName: \"kubernetes.io/host-path/c8cdad93-899c-45df-aba0-c680a947f021-var-run\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.038230 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/fbe55796-4440-4ec9-b69b-1f897aeb6f28-var-lib\") pod \"ovn-controller-ovs-pqtp2\" (UID: \"fbe55796-4440-4ec9-b69b-1f897aeb6f28\") " pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.038366 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbe55796-4440-4ec9-b69b-1f897aeb6f28-var-run\") pod \"ovn-controller-ovs-pqtp2\" (UID: \"fbe55796-4440-4ec9-b69b-1f897aeb6f28\") " pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.038412 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8cdad93-899c-45df-aba0-c680a947f021-var-run-ovn\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.038512 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/fbe55796-4440-4ec9-b69b-1f897aeb6f28-etc-ovs\") pod \"ovn-controller-ovs-pqtp2\" (UID: \"fbe55796-4440-4ec9-b69b-1f897aeb6f28\") " pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.038560 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8cdad93-899c-45df-aba0-c680a947f021-scripts\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " 
pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.038622 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6txw\" (UniqueName: \"kubernetes.io/projected/fbe55796-4440-4ec9-b69b-1f897aeb6f28-kube-api-access-q6txw\") pod \"ovn-controller-ovs-pqtp2\" (UID: \"fbe55796-4440-4ec9-b69b-1f897aeb6f28\") " pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.038629 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/fbe55796-4440-4ec9-b69b-1f897aeb6f28-var-lib\") pod \"ovn-controller-ovs-pqtp2\" (UID: \"fbe55796-4440-4ec9-b69b-1f897aeb6f28\") " pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.038660 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbe55796-4440-4ec9-b69b-1f897aeb6f28-scripts\") pod \"ovn-controller-ovs-pqtp2\" (UID: \"fbe55796-4440-4ec9-b69b-1f897aeb6f28\") " pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.038780 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9n4m\" (UniqueName: \"kubernetes.io/projected/c8cdad93-899c-45df-aba0-c680a947f021-kube-api-access-j9n4m\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.038839 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cdad93-899c-45df-aba0-c680a947f021-ovn-controller-tls-certs\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.038893 
4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8cdad93-899c-45df-aba0-c680a947f021-var-run-ovn\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.038903 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cdad93-899c-45df-aba0-c680a947f021-combined-ca-bundle\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.042309 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/fbe55796-4440-4ec9-b69b-1f897aeb6f28-var-log\") pod \"ovn-controller-ovs-pqtp2\" (UID: \"fbe55796-4440-4ec9-b69b-1f897aeb6f28\") " pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.042333 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c8cdad93-899c-45df-aba0-c680a947f021-var-log-ovn\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.043756 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/fbe55796-4440-4ec9-b69b-1f897aeb6f28-var-log\") pod \"ovn-controller-ovs-pqtp2\" (UID: \"fbe55796-4440-4ec9-b69b-1f897aeb6f28\") " pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.039470 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/fbe55796-4440-4ec9-b69b-1f897aeb6f28-etc-ovs\") pod \"ovn-controller-ovs-pqtp2\" (UID: \"fbe55796-4440-4ec9-b69b-1f897aeb6f28\") " pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.040670 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c8cdad93-899c-45df-aba0-c680a947f021-var-run\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.044896 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8cdad93-899c-45df-aba0-c680a947f021-scripts\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.045123 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbe55796-4440-4ec9-b69b-1f897aeb6f28-scripts\") pod \"ovn-controller-ovs-pqtp2\" (UID: \"fbe55796-4440-4ec9-b69b-1f897aeb6f28\") " pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.045176 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c8cdad93-899c-45df-aba0-c680a947f021-var-log-ovn\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.052636 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8cdad93-899c-45df-aba0-c680a947f021-ovn-controller-tls-certs\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" 
Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.053603 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8cdad93-899c-45df-aba0-c680a947f021-combined-ca-bundle\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.061467 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9n4m\" (UniqueName: \"kubernetes.io/projected/c8cdad93-899c-45df-aba0-c680a947f021-kube-api-access-j9n4m\") pod \"ovn-controller-9bcwf\" (UID: \"c8cdad93-899c-45df-aba0-c680a947f021\") " pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.077363 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6txw\" (UniqueName: \"kubernetes.io/projected/fbe55796-4440-4ec9-b69b-1f897aeb6f28-kube-api-access-q6txw\") pod \"ovn-controller-ovs-pqtp2\" (UID: \"fbe55796-4440-4ec9-b69b-1f897aeb6f28\") " pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.187350 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9bcwf" Feb 18 19:35:50 crc kubenswrapper[4754]: I0218 19:35:50.204247 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.633973 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.635802 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.638936 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-bd9dg" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.639883 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.640175 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.640320 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.659873 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.715003 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g64q\" (UniqueName: \"kubernetes.io/projected/92c1bca4-756a-4d14-b834-c46663c0c69b-kube-api-access-8g64q\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.715073 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92c1bca4-756a-4d14-b834-c46663c0c69b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.715121 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.715184 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c1bca4-756a-4d14-b834-c46663c0c69b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.715214 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/92c1bca4-756a-4d14-b834-c46663c0c69b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.715232 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92c1bca4-756a-4d14-b834-c46663c0c69b-config\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.715264 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c1bca4-756a-4d14-b834-c46663c0c69b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.715287 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c1bca4-756a-4d14-b834-c46663c0c69b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " 
pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.816614 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g64q\" (UniqueName: \"kubernetes.io/projected/92c1bca4-756a-4d14-b834-c46663c0c69b-kube-api-access-8g64q\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.816672 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92c1bca4-756a-4d14-b834-c46663c0c69b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.816731 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.816895 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c1bca4-756a-4d14-b834-c46663c0c69b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.816925 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/92c1bca4-756a-4d14-b834-c46663c0c69b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.816947 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/92c1bca4-756a-4d14-b834-c46663c0c69b-config\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.816980 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c1bca4-756a-4d14-b834-c46663c0c69b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.817005 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c1bca4-756a-4d14-b834-c46663c0c69b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.817393 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.817785 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/92c1bca4-756a-4d14-b834-c46663c0c69b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.818039 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92c1bca4-756a-4d14-b834-c46663c0c69b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " 
pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.818945 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92c1bca4-756a-4d14-b834-c46663c0c69b-config\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.827132 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c1bca4-756a-4d14-b834-c46663c0c69b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.827132 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c1bca4-756a-4d14-b834-c46663c0c69b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.827874 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c1bca4-756a-4d14-b834-c46663c0c69b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.839815 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g64q\" (UniqueName: \"kubernetes.io/projected/92c1bca4-756a-4d14-b834-c46663c0c69b-kube-api-access-8g64q\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.842368 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"92c1bca4-756a-4d14-b834-c46663c0c69b\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:35:52 crc kubenswrapper[4754]: I0218 19:35:52.960121 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 19:36:06 crc kubenswrapper[4754]: E0218 19:36:06.062371 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a" Feb 18 19:36:06 crc kubenswrapper[4754]: E0218 19:36:06.063432 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1 
--watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmwkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],
},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(b36185b7-72d3-4f98-9928-e1c4c27594fa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:36:06 crc kubenswrapper[4754]: E0218 19:36:06.068308 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="b36185b7-72d3-4f98-9928-e1c4c27594fa" Feb 18 19:36:06 crc kubenswrapper[4754]: E0218 19:36:06.723319 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="b36185b7-72d3-4f98-9928-e1c4c27594fa" Feb 18 19:36:07 crc kubenswrapper[4754]: E0218 19:36:07.379029 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 18 19:36:07 crc kubenswrapper[4754]: E0218 19:36:07.379653 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l6hfj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPro
be:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(d87128e7-abb0-4dd7-9b9f-04a4393c2313): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:36:07 crc kubenswrapper[4754]: E0218 19:36:07.380954 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="d87128e7-abb0-4dd7-9b9f-04a4393c2313" Feb 18 19:36:07 crc kubenswrapper[4754]: E0218 19:36:07.731465 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="d87128e7-abb0-4dd7-9b9f-04a4393c2313" Feb 18 19:36:08 crc kubenswrapper[4754]: I0218 19:36:08.097006 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:36:08 crc kubenswrapper[4754]: I0218 19:36:08.097089 4754 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:36:09 crc kubenswrapper[4754]: E0218 19:36:09.330183 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 18 19:36:09 crc kubenswrapper[4754]: E0218 19:36:09.330498 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nxnc4,ReadOnly:true,MountPath:/va
r/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(b735ae88-5b0c-47df-ac6a-a9dfac565b59): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:36:09 crc kubenswrapper[4754]: E0218 19:36:09.331776 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="b735ae88-5b0c-47df-ac6a-a9dfac565b59" Feb 18 19:36:09 crc kubenswrapper[4754]: E0218 19:36:09.749042 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="b735ae88-5b0c-47df-ac6a-a9dfac565b59" Feb 18 19:36:09 crc kubenswrapper[4754]: E0218 19:36:09.984252 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 18 19:36:09 crc kubenswrapper[4754]: E0218 19:36:09.984571 4754 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n94h5b7h5ffh57fh675h9dh5c9h58chbdh658hbfhf8h78h59bh648h548h5bdh554hbfh557h54chfch66bh554h7ch6dh99h648h7fhc5h5bfh74q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h5jsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPath
Expr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(830a61c8-fe23-4d5d-b661-bd21851cbdea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:36:09 crc kubenswrapper[4754]: E0218 19:36:09.985751 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="830a61c8-fe23-4d5d-b661-bd21851cbdea" Feb 18 19:36:10 crc kubenswrapper[4754]: E0218 19:36:10.761592 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" 
podUID="830a61c8-fe23-4d5d-b661-bd21851cbdea" Feb 18 19:36:14 crc kubenswrapper[4754]: E0218 19:36:14.524053 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 18 19:36:14 crc kubenswrapper[4754]: E0218 19:36:14.524849 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqgnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(3c266c06-8bfc-47ba-bab9-6ef36d6294e5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:36:14 crc 
kubenswrapper[4754]: E0218 19:36:14.526199 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="3c266c06-8bfc-47ba-bab9-6ef36d6294e5" Feb 18 19:36:14 crc kubenswrapper[4754]: E0218 19:36:14.566681 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 18 19:36:14 crc kubenswrapper[4754]: E0218 19:36:14.566997 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-
xmtqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(27ef0b17-0896-4405-97ae-c4145e9d388c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:36:14 crc kubenswrapper[4754]: E0218 19:36:14.568265 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="27ef0b17-0896-4405-97ae-c4145e9d388c" Feb 18 19:36:14 crc kubenswrapper[4754]: E0218 19:36:14.805678 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="27ef0b17-0896-4405-97ae-c4145e9d388c" Feb 18 19:36:14 crc kubenswrapper[4754]: E0218 19:36:14.808437 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="3c266c06-8bfc-47ba-bab9-6ef36d6294e5" Feb 18 19:36:15 crc kubenswrapper[4754]: E0218 19:36:15.496869 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 19:36:15 crc kubenswrapper[4754]: E0218 19:36:15.497695 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jq8lr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,L
ifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-54ppg_openstack(bd6d05cb-564c-44d7-83c8-d3487e363533): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4754]: E0218 19:36:15.499655 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-54ppg" podUID="bd6d05cb-564c-44d7-83c8-d3487e363533" Feb 18 19:36:15 crc kubenswrapper[4754]: E0218 19:36:15.518851 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 19:36:15 crc kubenswrapper[4754]: E0218 19:36:15.519190 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-np8gw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-c7dzm_openstack(f66fd58b-e940-4e0f-bf09-ceceaba2315f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4754]: E0218 19:36:15.520396 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-c7dzm" podUID="f66fd58b-e940-4e0f-bf09-ceceaba2315f" Feb 18 19:36:15 crc kubenswrapper[4754]: E0218 19:36:15.537638 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 19:36:15 crc kubenswrapper[4754]: E0218 19:36:15.537921 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jszvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-l8295_openstack(b6756083-12c5-4e8a-a53f-bb5191a537ff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4754]: E0218 19:36:15.539343 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-l8295" podUID="b6756083-12c5-4e8a-a53f-bb5191a537ff" Feb 18 19:36:15 crc kubenswrapper[4754]: E0218 19:36:15.555369 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 19:36:15 crc kubenswrapper[4754]: E0218 19:36:15.555632 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9549,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-bsnzd_openstack(d3a2e916-fb78-41e4-966b-c0c613144506): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:36:15 crc kubenswrapper[4754]: E0218 19:36:15.556811 4754 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-bsnzd" podUID="d3a2e916-fb78-41e4-966b-c0c613144506" Feb 18 19:36:15 crc kubenswrapper[4754]: E0218 19:36:15.803555 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-bsnzd" podUID="d3a2e916-fb78-41e4-966b-c0c613144506" Feb 18 19:36:15 crc kubenswrapper[4754]: E0218 19:36:15.804764 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-54ppg" podUID="bd6d05cb-564c-44d7-83c8-d3487e363533" Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.059697 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9bcwf"] Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.356210 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.413406 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pqtp2"] Feb 18 19:36:16 crc kubenswrapper[4754]: E0218 19:36:16.607252 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 18 19:36:16 crc kubenswrapper[4754]: E0218 19:36:16.607302 4754 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest 
list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 18 19:36:16 crc kubenswrapper[4754]: E0218 19:36:16.607450 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qnxs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(f70d6e04-a01e-4213-83b3-b986177730f1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:36:16 crc kubenswrapper[4754]: E0218 19:36:16.608649 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="f70d6e04-a01e-4213-83b3-b986177730f1" Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.688056 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-l8295" Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.694550 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c7dzm" Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.810497 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9bcwf" event={"ID":"c8cdad93-899c-45df-aba0-c680a947f021","Type":"ContainerStarted","Data":"69cf7b36d16d960c84cbff5e4f6223ac2068fc0fdd99ae64255ffaed48fb1f77"} Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.812505 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-l8295" event={"ID":"b6756083-12c5-4e8a-a53f-bb5191a537ff","Type":"ContainerDied","Data":"f0fa33d7c42cb8f97cac96dbff6941307665fa189bf2378e1f99bdcff16da83b"} Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.812558 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-l8295" Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.814099 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"92c1bca4-756a-4d14-b834-c46663c0c69b","Type":"ContainerStarted","Data":"b85ec02be4188071750ed62c5167290b42751a45c65d86e219d3f231da31734c"} Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.815520 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-c7dzm" event={"ID":"f66fd58b-e940-4e0f-bf09-ceceaba2315f","Type":"ContainerDied","Data":"4507a9f08a82259702fbd4102eb8f6f75121406226507a4c019bed708977f726"} Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.815603 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-c7dzm" Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.815898 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jszvd\" (UniqueName: \"kubernetes.io/projected/b6756083-12c5-4e8a-a53f-bb5191a537ff-kube-api-access-jszvd\") pod \"b6756083-12c5-4e8a-a53f-bb5191a537ff\" (UID: \"b6756083-12c5-4e8a-a53f-bb5191a537ff\") " Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.815968 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f66fd58b-e940-4e0f-bf09-ceceaba2315f-config\") pod \"f66fd58b-e940-4e0f-bf09-ceceaba2315f\" (UID: \"f66fd58b-e940-4e0f-bf09-ceceaba2315f\") " Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.816090 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np8gw\" (UniqueName: \"kubernetes.io/projected/f66fd58b-e940-4e0f-bf09-ceceaba2315f-kube-api-access-np8gw\") pod \"f66fd58b-e940-4e0f-bf09-ceceaba2315f\" (UID: \"f66fd58b-e940-4e0f-bf09-ceceaba2315f\") " Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.816181 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6756083-12c5-4e8a-a53f-bb5191a537ff-config\") pod \"b6756083-12c5-4e8a-a53f-bb5191a537ff\" (UID: \"b6756083-12c5-4e8a-a53f-bb5191a537ff\") " Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.816237 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6756083-12c5-4e8a-a53f-bb5191a537ff-dns-svc\") pod \"b6756083-12c5-4e8a-a53f-bb5191a537ff\" (UID: \"b6756083-12c5-4e8a-a53f-bb5191a537ff\") " Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.817039 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/f66fd58b-e940-4e0f-bf09-ceceaba2315f-config" (OuterVolumeSpecName: "config") pod "f66fd58b-e940-4e0f-bf09-ceceaba2315f" (UID: "f66fd58b-e940-4e0f-bf09-ceceaba2315f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.817166 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6756083-12c5-4e8a-a53f-bb5191a537ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b6756083-12c5-4e8a-a53f-bb5191a537ff" (UID: "b6756083-12c5-4e8a-a53f-bb5191a537ff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.817311 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pqtp2" event={"ID":"fbe55796-4440-4ec9-b69b-1f897aeb6f28","Type":"ContainerStarted","Data":"b77c9f3f29bb63fae3fc5b922ce80dec7c38a4ad8f9da0c91bd14587d02b316a"} Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.817702 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6756083-12c5-4e8a-a53f-bb5191a537ff-config" (OuterVolumeSpecName: "config") pod "b6756083-12c5-4e8a-a53f-bb5191a537ff" (UID: "b6756083-12c5-4e8a-a53f-bb5191a537ff"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:16 crc kubenswrapper[4754]: E0218 19:36:16.818710 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="f70d6e04-a01e-4213-83b3-b986177730f1" Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.829334 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f66fd58b-e940-4e0f-bf09-ceceaba2315f-kube-api-access-np8gw" (OuterVolumeSpecName: "kube-api-access-np8gw") pod "f66fd58b-e940-4e0f-bf09-ceceaba2315f" (UID: "f66fd58b-e940-4e0f-bf09-ceceaba2315f"). InnerVolumeSpecName "kube-api-access-np8gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.829376 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6756083-12c5-4e8a-a53f-bb5191a537ff-kube-api-access-jszvd" (OuterVolumeSpecName: "kube-api-access-jszvd") pod "b6756083-12c5-4e8a-a53f-bb5191a537ff" (UID: "b6756083-12c5-4e8a-a53f-bb5191a537ff"). InnerVolumeSpecName "kube-api-access-jszvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.919057 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jszvd\" (UniqueName: \"kubernetes.io/projected/b6756083-12c5-4e8a-a53f-bb5191a537ff-kube-api-access-jszvd\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.919108 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f66fd58b-e940-4e0f-bf09-ceceaba2315f-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.919124 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np8gw\" (UniqueName: \"kubernetes.io/projected/f66fd58b-e940-4e0f-bf09-ceceaba2315f-kube-api-access-np8gw\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.919136 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6756083-12c5-4e8a-a53f-bb5191a537ff-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:16 crc kubenswrapper[4754]: I0218 19:36:16.919163 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6756083-12c5-4e8a-a53f-bb5191a537ff-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:17 crc kubenswrapper[4754]: I0218 19:36:17.183160 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l8295"] Feb 18 19:36:17 crc kubenswrapper[4754]: I0218 19:36:17.191017 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-l8295"] Feb 18 19:36:17 crc kubenswrapper[4754]: I0218 19:36:17.228596 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-c7dzm"] Feb 18 19:36:17 crc kubenswrapper[4754]: I0218 19:36:17.230613 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-675f4bcbfc-c7dzm"] Feb 18 19:36:17 crc kubenswrapper[4754]: I0218 19:36:17.379745 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 19:36:17 crc kubenswrapper[4754]: I0218 19:36:17.827895 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"693eb041-c8e5-4196-80b5-c3aff2bdd232","Type":"ContainerStarted","Data":"63137706101cdbbc786bc449240d12040a30a3323be398d11c9da2a6c316f72d"} Feb 18 19:36:18 crc kubenswrapper[4754]: I0218 19:36:18.223857 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6756083-12c5-4e8a-a53f-bb5191a537ff" path="/var/lib/kubelet/pods/b6756083-12c5-4e8a-a53f-bb5191a537ff/volumes" Feb 18 19:36:18 crc kubenswrapper[4754]: I0218 19:36:18.224826 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f66fd58b-e940-4e0f-bf09-ceceaba2315f" path="/var/lib/kubelet/pods/f66fd58b-e940-4e0f-bf09-ceceaba2315f/volumes" Feb 18 19:36:20 crc kubenswrapper[4754]: I0218 19:36:20.858098 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"693eb041-c8e5-4196-80b5-c3aff2bdd232","Type":"ContainerStarted","Data":"c7b9ccf3af581a2780017eeac34bf677b6ca80bcd10a8053876c7c8007bcbab1"} Feb 18 19:36:20 crc kubenswrapper[4754]: I0218 19:36:20.860322 4754 generic.go:334] "Generic (PLEG): container finished" podID="fbe55796-4440-4ec9-b69b-1f897aeb6f28" containerID="762312f29f7853c5a87de9fec69adce30ba8b5ba913a2c85a6867313f4f469f0" exitCode=0 Feb 18 19:36:20 crc kubenswrapper[4754]: I0218 19:36:20.860419 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pqtp2" event={"ID":"fbe55796-4440-4ec9-b69b-1f897aeb6f28","Type":"ContainerDied","Data":"762312f29f7853c5a87de9fec69adce30ba8b5ba913a2c85a6867313f4f469f0"} Feb 18 19:36:20 crc kubenswrapper[4754]: I0218 19:36:20.862937 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-9bcwf" event={"ID":"c8cdad93-899c-45df-aba0-c680a947f021","Type":"ContainerStarted","Data":"0c3728b9d159ef068bc53fd6ba83f145633893e367dbb8709305de921988ffa8"} Feb 18 19:36:20 crc kubenswrapper[4754]: I0218 19:36:20.863438 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-9bcwf" Feb 18 19:36:20 crc kubenswrapper[4754]: I0218 19:36:20.864930 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"92c1bca4-756a-4d14-b834-c46663c0c69b","Type":"ContainerStarted","Data":"19c0e1a8f447ecf3db3ef08f83737eefc541db9437108702a718ecc11a8200c2"} Feb 18 19:36:20 crc kubenswrapper[4754]: I0218 19:36:20.924418 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9bcwf" podStartSLOduration=28.513956482 podStartE2EDuration="31.92439682s" podCreationTimestamp="2026-02-18 19:35:49 +0000 UTC" firstStartedPulling="2026-02-18 19:36:16.625368934 +0000 UTC m=+1079.075781720" lastFinishedPulling="2026-02-18 19:36:20.035809262 +0000 UTC m=+1082.486222058" observedRunningTime="2026-02-18 19:36:20.915475385 +0000 UTC m=+1083.365888191" watchObservedRunningTime="2026-02-18 19:36:20.92439682 +0000 UTC m=+1083.374809626" Feb 18 19:36:21 crc kubenswrapper[4754]: I0218 19:36:21.897113 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pqtp2" event={"ID":"fbe55796-4440-4ec9-b69b-1f897aeb6f28","Type":"ContainerStarted","Data":"8ade012686b0e1ae6448724e8995f27d2ea7e2cdabdd6f89e49682d64908ee9b"} Feb 18 19:36:22 crc kubenswrapper[4754]: I0218 19:36:22.911406 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"693eb041-c8e5-4196-80b5-c3aff2bdd232","Type":"ContainerStarted","Data":"1d7be5662324e3f2e2d564da8470eb89118bfa17572585e9090c04a90c62fbbf"} Feb 18 19:36:22 crc kubenswrapper[4754]: I0218 19:36:22.914791 4754 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ovn-controller-ovs-pqtp2" event={"ID":"fbe55796-4440-4ec9-b69b-1f897aeb6f28","Type":"ContainerStarted","Data":"148e2241f7a186256ebf23f42a9f662d36d5621bd5b17f1314e8e771f43220fd"} Feb 18 19:36:22 crc kubenswrapper[4754]: I0218 19:36:22.915008 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:36:22 crc kubenswrapper[4754]: I0218 19:36:22.915067 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pqtp2" Feb 18 19:36:22 crc kubenswrapper[4754]: I0218 19:36:22.917995 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"92c1bca4-756a-4d14-b834-c46663c0c69b","Type":"ContainerStarted","Data":"1115d493689ef47b67ed3313759177b2b64bbe872b30b6665358838e02a9c25e"} Feb 18 19:36:22 crc kubenswrapper[4754]: I0218 19:36:22.939891 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=30.474054952 podStartE2EDuration="34.939865042s" podCreationTimestamp="2026-02-18 19:35:48 +0000 UTC" firstStartedPulling="2026-02-18 19:36:17.383708788 +0000 UTC m=+1079.834121584" lastFinishedPulling="2026-02-18 19:36:21.849518878 +0000 UTC m=+1084.299931674" observedRunningTime="2026-02-18 19:36:22.935080023 +0000 UTC m=+1085.385492829" watchObservedRunningTime="2026-02-18 19:36:22.939865042 +0000 UTC m=+1085.390277838" Feb 18 19:36:22 crc kubenswrapper[4754]: I0218 19:36:22.960374 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 18 19:36:22 crc kubenswrapper[4754]: I0218 19:36:22.960499 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 18 19:36:22 crc kubenswrapper[4754]: I0218 19:36:22.963024 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" 
podStartSLOduration=26.743717623 podStartE2EDuration="31.962991866s" podCreationTimestamp="2026-02-18 19:35:51 +0000 UTC" firstStartedPulling="2026-02-18 19:36:16.623057263 +0000 UTC m=+1079.073470059" lastFinishedPulling="2026-02-18 19:36:21.842331506 +0000 UTC m=+1084.292744302" observedRunningTime="2026-02-18 19:36:22.954853024 +0000 UTC m=+1085.405265830" watchObservedRunningTime="2026-02-18 19:36:22.962991866 +0000 UTC m=+1085.413404662" Feb 18 19:36:22 crc kubenswrapper[4754]: I0218 19:36:22.976051 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-pqtp2" podStartSLOduration=30.562483937 podStartE2EDuration="33.976019419s" podCreationTimestamp="2026-02-18 19:35:49 +0000 UTC" firstStartedPulling="2026-02-18 19:36:16.615643284 +0000 UTC m=+1079.066056080" lastFinishedPulling="2026-02-18 19:36:20.029178766 +0000 UTC m=+1082.479591562" observedRunningTime="2026-02-18 19:36:22.975499683 +0000 UTC m=+1085.425912499" watchObservedRunningTime="2026-02-18 19:36:22.976019419 +0000 UTC m=+1085.426432215" Feb 18 19:36:23 crc kubenswrapper[4754]: I0218 19:36:23.931741 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d87128e7-abb0-4dd7-9b9f-04a4393c2313","Type":"ContainerStarted","Data":"8dba12de5efdeb76fc4a632f246c1d52ee9b8ba428ec33056f2bef2b2ac692e4"} Feb 18 19:36:23 crc kubenswrapper[4754]: I0218 19:36:23.934734 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"830a61c8-fe23-4d5d-b661-bd21851cbdea","Type":"ContainerStarted","Data":"42bad3137e12f152d98a5012473a2a3117ae90c59e1757f159cf9571c0ab1a83"} Feb 18 19:36:23 crc kubenswrapper[4754]: I0218 19:36:23.934973 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 18 19:36:23 crc kubenswrapper[4754]: I0218 19:36:23.937555 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"b735ae88-5b0c-47df-ac6a-a9dfac565b59","Type":"ContainerStarted","Data":"1d206df0767dd369b00e1f169caa7ca7543ee4b83c1ad90e91c5c87065e14c6b"} Feb 18 19:36:23 crc kubenswrapper[4754]: I0218 19:36:23.941864 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b36185b7-72d3-4f98-9928-e1c4c27594fa","Type":"ContainerStarted","Data":"5a02a86502889e66b82b41c6c17ab028c2bfb4975fdc645fbe52fcbc950c3263"} Feb 18 19:36:23 crc kubenswrapper[4754]: I0218 19:36:23.983134 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.6600686119999999 podStartE2EDuration="40.983102219s" podCreationTimestamp="2026-02-18 19:35:43 +0000 UTC" firstStartedPulling="2026-02-18 19:35:44.310288453 +0000 UTC m=+1046.760701249" lastFinishedPulling="2026-02-18 19:36:23.63332205 +0000 UTC m=+1086.083734856" observedRunningTime="2026-02-18 19:36:23.98088758 +0000 UTC m=+1086.431300376" watchObservedRunningTime="2026-02-18 19:36:23.983102219 +0000 UTC m=+1086.433515015" Feb 18 19:36:24 crc kubenswrapper[4754]: I0218 19:36:24.629642 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 18 19:36:25 crc kubenswrapper[4754]: I0218 19:36:25.629391 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 18 19:36:25 crc kubenswrapper[4754]: I0218 19:36:25.703119 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.009301 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.030960 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.065982 4754 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.263330 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bsnzd"] Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.298457 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-5q4qt"] Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.299852 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.306016 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.320700 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-7s8mb"] Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.321964 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.328066 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.336829 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-5q4qt"] Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.346458 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-7s8mb"] Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.428948 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7s8mb\" (UID: \"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2\") " pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.429046 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2-config\") pod \"ovn-controller-metrics-7s8mb\" (UID: \"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2\") " pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.429101 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbabf775-b706-47d3-988a-c8fb00e723d8-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-5q4qt\" (UID: \"bbabf775-b706-47d3-988a-c8fb00e723d8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.429126 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bbabf775-b706-47d3-988a-c8fb00e723d8-config\") pod \"dnsmasq-dns-5bf47b49b7-5q4qt\" (UID: \"bbabf775-b706-47d3-988a-c8fb00e723d8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.429282 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2-combined-ca-bundle\") pod \"ovn-controller-metrics-7s8mb\" (UID: \"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2\") " pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.429326 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2-ovn-rundir\") pod \"ovn-controller-metrics-7s8mb\" (UID: \"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2\") " pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.429373 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2-ovs-rundir\") pod \"ovn-controller-metrics-7s8mb\" (UID: \"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2\") " pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.429419 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcf4c\" (UniqueName: \"kubernetes.io/projected/bbabf775-b706-47d3-988a-c8fb00e723d8-kube-api-access-pcf4c\") pod \"dnsmasq-dns-5bf47b49b7-5q4qt\" (UID: \"bbabf775-b706-47d3-988a-c8fb00e723d8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.429445 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-rqkbs\" (UniqueName: \"kubernetes.io/projected/ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2-kube-api-access-rqkbs\") pod \"ovn-controller-metrics-7s8mb\" (UID: \"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2\") " pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.429480 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbabf775-b706-47d3-988a-c8fb00e723d8-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-5q4qt\" (UID: \"bbabf775-b706-47d3-988a-c8fb00e723d8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.451300 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-54ppg"] Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.484391 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-t9npd"] Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.486785 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.493477 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.523014 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.524774 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.534647 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbabf775-b706-47d3-988a-c8fb00e723d8-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-5q4qt\" (UID: \"bbabf775-b706-47d3-988a-c8fb00e723d8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.534769 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7s8mb\" (UID: \"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2\") " pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.534829 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2-config\") pod \"ovn-controller-metrics-7s8mb\" (UID: \"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2\") " pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.534877 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbabf775-b706-47d3-988a-c8fb00e723d8-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-5q4qt\" (UID: \"bbabf775-b706-47d3-988a-c8fb00e723d8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.534907 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbabf775-b706-47d3-988a-c8fb00e723d8-config\") pod \"dnsmasq-dns-5bf47b49b7-5q4qt\" (UID: \"bbabf775-b706-47d3-988a-c8fb00e723d8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" Feb 18 19:36:26 crc 
kubenswrapper[4754]: I0218 19:36:26.534959 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2-combined-ca-bundle\") pod \"ovn-controller-metrics-7s8mb\" (UID: \"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2\") " pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.534998 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2-ovn-rundir\") pod \"ovn-controller-metrics-7s8mb\" (UID: \"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2\") " pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.535034 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2-ovs-rundir\") pod \"ovn-controller-metrics-7s8mb\" (UID: \"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2\") " pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.535066 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcf4c\" (UniqueName: \"kubernetes.io/projected/bbabf775-b706-47d3-988a-c8fb00e723d8-kube-api-access-pcf4c\") pod \"dnsmasq-dns-5bf47b49b7-5q4qt\" (UID: \"bbabf775-b706-47d3-988a-c8fb00e723d8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.535092 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqkbs\" (UniqueName: \"kubernetes.io/projected/ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2-kube-api-access-rqkbs\") pod \"ovn-controller-metrics-7s8mb\" (UID: \"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2\") " pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 
19:36:26.535803 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2-ovn-rundir\") pod \"ovn-controller-metrics-7s8mb\" (UID: \"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2\") " pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.535859 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2-ovs-rundir\") pod \"ovn-controller-metrics-7s8mb\" (UID: \"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2\") " pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.536377 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbabf775-b706-47d3-988a-c8fb00e723d8-config\") pod \"dnsmasq-dns-5bf47b49b7-5q4qt\" (UID: \"bbabf775-b706-47d3-988a-c8fb00e723d8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.536439 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-t9npd"] Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.536440 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2-config\") pod \"ovn-controller-metrics-7s8mb\" (UID: \"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2\") " pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.536811 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbabf775-b706-47d3-988a-c8fb00e723d8-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-5q4qt\" (UID: \"bbabf775-b706-47d3-988a-c8fb00e723d8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" Feb 18 19:36:26 crc 
kubenswrapper[4754]: I0218 19:36:26.536923 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbabf775-b706-47d3-988a-c8fb00e723d8-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-5q4qt\" (UID: \"bbabf775-b706-47d3-988a-c8fb00e723d8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.540850 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.541021 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-7cmb8" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.541381 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.541594 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.555250 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2-combined-ca-bundle\") pod \"ovn-controller-metrics-7s8mb\" (UID: \"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2\") " pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.556579 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7s8mb\" (UID: \"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2\") " pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.574103 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqkbs\" (UniqueName: 
\"kubernetes.io/projected/ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2-kube-api-access-rqkbs\") pod \"ovn-controller-metrics-7s8mb\" (UID: \"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2\") " pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.588587 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.591020 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcf4c\" (UniqueName: \"kubernetes.io/projected/bbabf775-b706-47d3-988a-c8fb00e723d8-kube-api-access-pcf4c\") pod \"dnsmasq-dns-5bf47b49b7-5q4qt\" (UID: \"bbabf775-b706-47d3-988a-c8fb00e723d8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.629062 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.636704 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a1967ff-39ca-47f9-b25e-af150a2e567b-scripts\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.636828 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfrd2\" (UniqueName: \"kubernetes.io/projected/3a1967ff-39ca-47f9-b25e-af150a2e567b-kube-api-access-jfrd2\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.636864 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-config\") pod \"dnsmasq-dns-8554648995-t9npd\" 
(UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") " pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.636969 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a1967ff-39ca-47f9-b25e-af150a2e567b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.637036 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1967ff-39ca-47f9-b25e-af150a2e567b-config\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.637074 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3a1967ff-39ca-47f9-b25e-af150a2e567b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.637791 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a1967ff-39ca-47f9-b25e-af150a2e567b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.637972 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-t9npd\" (UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") " 
pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.638042 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-t9npd\" (UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") " pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.638116 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a1967ff-39ca-47f9-b25e-af150a2e567b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.638169 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l94qm\" (UniqueName: \"kubernetes.io/projected/153dc5a5-f545-4be6-b170-2e0f5b64939e-kube-api-access-l94qm\") pod \"dnsmasq-dns-8554648995-t9npd\" (UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") " pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.638194 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-dns-svc\") pod \"dnsmasq-dns-8554648995-t9npd\" (UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") " pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.652899 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-7s8mb" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.748708 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a1967ff-39ca-47f9-b25e-af150a2e567b-scripts\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.749110 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfrd2\" (UniqueName: \"kubernetes.io/projected/3a1967ff-39ca-47f9-b25e-af150a2e567b-kube-api-access-jfrd2\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.749164 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-config\") pod \"dnsmasq-dns-8554648995-t9npd\" (UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") " pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.749211 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a1967ff-39ca-47f9-b25e-af150a2e567b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.749259 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1967ff-39ca-47f9-b25e-af150a2e567b-config\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.749291 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3a1967ff-39ca-47f9-b25e-af150a2e567b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.749316 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a1967ff-39ca-47f9-b25e-af150a2e567b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.749382 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-t9npd\" (UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") " pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.749406 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-t9npd\" (UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") " pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.749433 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a1967ff-39ca-47f9-b25e-af150a2e567b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.749455 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l94qm\" (UniqueName: \"kubernetes.io/projected/153dc5a5-f545-4be6-b170-2e0f5b64939e-kube-api-access-l94qm\") 
pod \"dnsmasq-dns-8554648995-t9npd\" (UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") " pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.749487 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-dns-svc\") pod \"dnsmasq-dns-8554648995-t9npd\" (UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") " pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.750679 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-dns-svc\") pod \"dnsmasq-dns-8554648995-t9npd\" (UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") " pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.751034 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3a1967ff-39ca-47f9-b25e-af150a2e567b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.751910 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a1967ff-39ca-47f9-b25e-af150a2e567b-scripts\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.752921 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-t9npd\" (UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") " pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.754209 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-t9npd\" (UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") " pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.754250 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-config\") pod \"dnsmasq-dns-8554648995-t9npd\" (UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") " pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.754483 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1967ff-39ca-47f9-b25e-af150a2e567b-config\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.758980 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a1967ff-39ca-47f9-b25e-af150a2e567b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.759370 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a1967ff-39ca-47f9-b25e-af150a2e567b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.766009 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a1967ff-39ca-47f9-b25e-af150a2e567b-ovn-northd-tls-certs\") pod 
\"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.804511 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l94qm\" (UniqueName: \"kubernetes.io/projected/153dc5a5-f545-4be6-b170-2e0f5b64939e-kube-api-access-l94qm\") pod \"dnsmasq-dns-8554648995-t9npd\" (UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") " pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.810902 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.811120 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfrd2\" (UniqueName: \"kubernetes.io/projected/3a1967ff-39ca-47f9-b25e-af150a2e567b-kube-api-access-jfrd2\") pod \"ovn-northd-0\" (UID: \"3a1967ff-39ca-47f9-b25e-af150a2e567b\") " pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.954177 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 19:36:26 crc kubenswrapper[4754]: I0218 19:36:26.959429 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-54ppg" Feb 18 19:36:27 crc kubenswrapper[4754]: I0218 19:36:27.018784 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-54ppg" event={"ID":"bd6d05cb-564c-44d7-83c8-d3487e363533","Type":"ContainerDied","Data":"3f9d31c0bef58a76c713ea587e262967e5c283e26eca6f6ec1c4af853c24de79"} Feb 18 19:36:27 crc kubenswrapper[4754]: I0218 19:36:27.018929 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-54ppg" Feb 18 19:36:27 crc kubenswrapper[4754]: I0218 19:36:27.162821 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq8lr\" (UniqueName: \"kubernetes.io/projected/bd6d05cb-564c-44d7-83c8-d3487e363533-kube-api-access-jq8lr\") pod \"bd6d05cb-564c-44d7-83c8-d3487e363533\" (UID: \"bd6d05cb-564c-44d7-83c8-d3487e363533\") " Feb 18 19:36:27 crc kubenswrapper[4754]: I0218 19:36:27.163111 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd6d05cb-564c-44d7-83c8-d3487e363533-config\") pod \"bd6d05cb-564c-44d7-83c8-d3487e363533\" (UID: \"bd6d05cb-564c-44d7-83c8-d3487e363533\") " Feb 18 19:36:27 crc kubenswrapper[4754]: I0218 19:36:27.163215 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd6d05cb-564c-44d7-83c8-d3487e363533-dns-svc\") pod \"bd6d05cb-564c-44d7-83c8-d3487e363533\" (UID: \"bd6d05cb-564c-44d7-83c8-d3487e363533\") " Feb 18 19:36:27 crc kubenswrapper[4754]: I0218 19:36:27.165166 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd6d05cb-564c-44d7-83c8-d3487e363533-config" (OuterVolumeSpecName: "config") pod "bd6d05cb-564c-44d7-83c8-d3487e363533" (UID: "bd6d05cb-564c-44d7-83c8-d3487e363533"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:27 crc kubenswrapper[4754]: I0218 19:36:27.165713 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd6d05cb-564c-44d7-83c8-d3487e363533-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd6d05cb-564c-44d7-83c8-d3487e363533" (UID: "bd6d05cb-564c-44d7-83c8-d3487e363533"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:27 crc kubenswrapper[4754]: I0218 19:36:27.167590 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd6d05cb-564c-44d7-83c8-d3487e363533-kube-api-access-jq8lr" (OuterVolumeSpecName: "kube-api-access-jq8lr") pod "bd6d05cb-564c-44d7-83c8-d3487e363533" (UID: "bd6d05cb-564c-44d7-83c8-d3487e363533"). InnerVolumeSpecName "kube-api-access-jq8lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:27 crc kubenswrapper[4754]: I0218 19:36:27.260507 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-7s8mb"] Feb 18 19:36:27 crc kubenswrapper[4754]: I0218 19:36:27.265645 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd6d05cb-564c-44d7-83c8-d3487e363533-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:27 crc kubenswrapper[4754]: I0218 19:36:27.265705 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq8lr\" (UniqueName: \"kubernetes.io/projected/bd6d05cb-564c-44d7-83c8-d3487e363533-kube-api-access-jq8lr\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:27 crc kubenswrapper[4754]: I0218 19:36:27.265723 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd6d05cb-564c-44d7-83c8-d3487e363533-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:27 crc kubenswrapper[4754]: I0218 19:36:27.550648 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-54ppg"] Feb 18 19:36:27 crc kubenswrapper[4754]: I0218 19:36:27.554439 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-54ppg"] Feb 18 19:36:27 crc kubenswrapper[4754]: I0218 19:36:27.582616 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-5q4qt"] Feb 18 19:36:27 crc 
kubenswrapper[4754]: I0218 19:36:27.723982 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-t9npd"] Feb 18 19:36:27 crc kubenswrapper[4754]: I0218 19:36:27.889066 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 19:36:27 crc kubenswrapper[4754]: W0218 19:36:27.925042 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a1967ff_39ca_47f9_b25e_af150a2e567b.slice/crio-925ee4af3197953ff649cc121417168749f3c173b0e6533bab779c6031d68b36 WatchSource:0}: Error finding container 925ee4af3197953ff649cc121417168749f3c173b0e6533bab779c6031d68b36: Status 404 returned error can't find the container with id 925ee4af3197953ff649cc121417168749f3c173b0e6533bab779c6031d68b36 Feb 18 19:36:28 crc kubenswrapper[4754]: I0218 19:36:28.039656 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7s8mb" event={"ID":"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2","Type":"ContainerStarted","Data":"28691eea2dcaf5855521a725e09854332a78dafec57daf72e222cd7c662fb17d"} Feb 18 19:36:28 crc kubenswrapper[4754]: I0218 19:36:28.041480 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3a1967ff-39ca-47f9-b25e-af150a2e567b","Type":"ContainerStarted","Data":"925ee4af3197953ff649cc121417168749f3c173b0e6533bab779c6031d68b36"} Feb 18 19:36:28 crc kubenswrapper[4754]: I0218 19:36:28.042965 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-t9npd" event={"ID":"153dc5a5-f545-4be6-b170-2e0f5b64939e","Type":"ContainerStarted","Data":"2c2398a8a2307fb70e29428ab6dcec0c81dd991c6e3b1df626e75adebc9093c1"} Feb 18 19:36:28 crc kubenswrapper[4754]: I0218 19:36:28.044585 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" 
event={"ID":"bbabf775-b706-47d3-988a-c8fb00e723d8","Type":"ContainerStarted","Data":"9e6a632a8f166e17a30312c3b5e65c6a72ef7039181f08095b026838079e68c8"} Feb 18 19:36:28 crc kubenswrapper[4754]: I0218 19:36:28.221375 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd6d05cb-564c-44d7-83c8-d3487e363533" path="/var/lib/kubelet/pods/bd6d05cb-564c-44d7-83c8-d3487e363533/volumes" Feb 18 19:36:28 crc kubenswrapper[4754]: I0218 19:36:28.572374 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.068068 4754 generic.go:334] "Generic (PLEG): container finished" podID="d3a2e916-fb78-41e4-966b-c0c613144506" containerID="8f4e106b338521aa6a6703fce92ead178a31da9e79b12d3ce42ceef9844158cd" exitCode=0 Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.069934 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bsnzd" event={"ID":"d3a2e916-fb78-41e4-966b-c0c613144506","Type":"ContainerDied","Data":"8f4e106b338521aa6a6703fce92ead178a31da9e79b12d3ce42ceef9844158cd"} Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.083657 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"27ef0b17-0896-4405-97ae-c4145e9d388c","Type":"ContainerStarted","Data":"bc016a87321fdef93118b52917cd46bef270d1998a77f31c2f7bf18362b6281d"} Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.098750 4754 generic.go:334] "Generic (PLEG): container finished" podID="bbabf775-b706-47d3-988a-c8fb00e723d8" containerID="bacc4c82f2f29acfd8baa51ea777c97250ee931c30e089cb662fa3041847538a" exitCode=0 Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.098874 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" 
event={"ID":"bbabf775-b706-47d3-988a-c8fb00e723d8","Type":"ContainerDied","Data":"bacc4c82f2f29acfd8baa51ea777c97250ee931c30e089cb662fa3041847538a"} Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.114335 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7s8mb" event={"ID":"ae780404-5ca8-4f7e-a5ec-f5f948e0a2f2","Type":"ContainerStarted","Data":"1e8177253492e18d20e8f0a376279c3ecded318550d2a512163afe2ee2c265bc"} Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.123755 4754 generic.go:334] "Generic (PLEG): container finished" podID="b735ae88-5b0c-47df-ac6a-a9dfac565b59" containerID="1d206df0767dd369b00e1f169caa7ca7543ee4b83c1ad90e91c5c87065e14c6b" exitCode=0 Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.123822 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b735ae88-5b0c-47df-ac6a-a9dfac565b59","Type":"ContainerDied","Data":"1d206df0767dd369b00e1f169caa7ca7543ee4b83c1ad90e91c5c87065e14c6b"} Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.126211 4754 generic.go:334] "Generic (PLEG): container finished" podID="153dc5a5-f545-4be6-b170-2e0f5b64939e" containerID="1bdeb72c06bb86ed53bb1ccdb23154b383f2665cf680bea9dfadd4349e920012" exitCode=0 Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.126434 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-t9npd" event={"ID":"153dc5a5-f545-4be6-b170-2e0f5b64939e","Type":"ContainerDied","Data":"1bdeb72c06bb86ed53bb1ccdb23154b383f2665cf680bea9dfadd4349e920012"} Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.165132 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-7s8mb" podStartSLOduration=3.16510859 podStartE2EDuration="3.16510859s" podCreationTimestamp="2026-02-18 19:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 19:36:29.155699489 +0000 UTC m=+1091.606112295" watchObservedRunningTime="2026-02-18 19:36:29.16510859 +0000 UTC m=+1091.615521376" Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.503379 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bsnzd" Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.550073 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3a2e916-fb78-41e4-966b-c0c613144506-config\") pod \"d3a2e916-fb78-41e4-966b-c0c613144506\" (UID: \"d3a2e916-fb78-41e4-966b-c0c613144506\") " Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.550307 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3a2e916-fb78-41e4-966b-c0c613144506-dns-svc\") pod \"d3a2e916-fb78-41e4-966b-c0c613144506\" (UID: \"d3a2e916-fb78-41e4-966b-c0c613144506\") " Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.550415 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9549\" (UniqueName: \"kubernetes.io/projected/d3a2e916-fb78-41e4-966b-c0c613144506-kube-api-access-q9549\") pod \"d3a2e916-fb78-41e4-966b-c0c613144506\" (UID: \"d3a2e916-fb78-41e4-966b-c0c613144506\") " Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.559845 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3a2e916-fb78-41e4-966b-c0c613144506-kube-api-access-q9549" (OuterVolumeSpecName: "kube-api-access-q9549") pod "d3a2e916-fb78-41e4-966b-c0c613144506" (UID: "d3a2e916-fb78-41e4-966b-c0c613144506"). InnerVolumeSpecName "kube-api-access-q9549". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.576975 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a2e916-fb78-41e4-966b-c0c613144506-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d3a2e916-fb78-41e4-966b-c0c613144506" (UID: "d3a2e916-fb78-41e4-966b-c0c613144506"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.652679 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3a2e916-fb78-41e4-966b-c0c613144506-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.652718 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9549\" (UniqueName: \"kubernetes.io/projected/d3a2e916-fb78-41e4-966b-c0c613144506-kube-api-access-q9549\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.781672 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a2e916-fb78-41e4-966b-c0c613144506-config" (OuterVolumeSpecName: "config") pod "d3a2e916-fb78-41e4-966b-c0c613144506" (UID: "d3a2e916-fb78-41e4-966b-c0c613144506"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4754]: I0218 19:36:29.856601 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3a2e916-fb78-41e4-966b-c0c613144506-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:30 crc kubenswrapper[4754]: I0218 19:36:30.138459 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-t9npd" event={"ID":"153dc5a5-f545-4be6-b170-2e0f5b64939e","Type":"ContainerStarted","Data":"5d6f0e87fb0a12d361c39278d29c306922378ef236dd580ef52ee9168d2ddccc"} Feb 18 19:36:30 crc kubenswrapper[4754]: I0218 19:36:30.138636 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:30 crc kubenswrapper[4754]: I0218 19:36:30.141459 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bsnzd" event={"ID":"d3a2e916-fb78-41e4-966b-c0c613144506","Type":"ContainerDied","Data":"90c7a185ddfb44dcde401471e03a5cfe071f7e41cd597ee765b3e881a61e0c6c"} Feb 18 19:36:30 crc kubenswrapper[4754]: I0218 19:36:30.141527 4754 scope.go:117] "RemoveContainer" containerID="8f4e106b338521aa6a6703fce92ead178a31da9e79b12d3ce42ceef9844158cd" Feb 18 19:36:30 crc kubenswrapper[4754]: I0218 19:36:30.141609 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bsnzd" Feb 18 19:36:30 crc kubenswrapper[4754]: I0218 19:36:30.143948 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" event={"ID":"bbabf775-b706-47d3-988a-c8fb00e723d8","Type":"ContainerStarted","Data":"da306bb50d0090c789dc3d7e63f4ee379d3f915c5450f9c5f8608a5ec772ab6a"} Feb 18 19:36:30 crc kubenswrapper[4754]: I0218 19:36:30.144120 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" Feb 18 19:36:30 crc kubenswrapper[4754]: I0218 19:36:30.146287 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f70d6e04-a01e-4213-83b3-b986177730f1","Type":"ContainerStarted","Data":"22357159ca01aa7807978e2e49d9d640795bdd23952b4fcab4be090c073210c6"} Feb 18 19:36:30 crc kubenswrapper[4754]: I0218 19:36:30.146535 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 19:36:30 crc kubenswrapper[4754]: I0218 19:36:30.148790 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b735ae88-5b0c-47df-ac6a-a9dfac565b59","Type":"ContainerStarted","Data":"44ee4aeac94c1224438bff34ea8b6d214a2660a822c6590f588038531fe208e5"} Feb 18 19:36:30 crc kubenswrapper[4754]: I0218 19:36:30.172707 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-t9npd" podStartSLOduration=4.172685425 podStartE2EDuration="4.172685425s" podCreationTimestamp="2026-02-18 19:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:30.168114125 +0000 UTC m=+1092.618526941" watchObservedRunningTime="2026-02-18 19:36:30.172685425 +0000 UTC m=+1092.623098221" Feb 18 19:36:30 crc kubenswrapper[4754]: I0218 19:36:30.199692 4754 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.621566939000001 podStartE2EDuration="50.199668509s" podCreationTimestamp="2026-02-18 19:35:40 +0000 UTC" firstStartedPulling="2026-02-18 19:35:43.055840949 +0000 UTC m=+1045.506253745" lastFinishedPulling="2026-02-18 19:36:23.633942469 +0000 UTC m=+1086.084355315" observedRunningTime="2026-02-18 19:36:30.197407519 +0000 UTC m=+1092.647820325" watchObservedRunningTime="2026-02-18 19:36:30.199668509 +0000 UTC m=+1092.650081305" Feb 18 19:36:30 crc kubenswrapper[4754]: I0218 19:36:30.230487 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.844850645 podStartE2EDuration="45.23045961s" podCreationTimestamp="2026-02-18 19:35:45 +0000 UTC" firstStartedPulling="2026-02-18 19:35:46.5570432 +0000 UTC m=+1049.007455996" lastFinishedPulling="2026-02-18 19:36:28.942652165 +0000 UTC m=+1091.393064961" observedRunningTime="2026-02-18 19:36:30.218258624 +0000 UTC m=+1092.668671420" watchObservedRunningTime="2026-02-18 19:36:30.23045961 +0000 UTC m=+1092.680872406" Feb 18 19:36:30 crc kubenswrapper[4754]: I0218 19:36:30.241161 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" podStartSLOduration=4.241121321 podStartE2EDuration="4.241121321s" podCreationTimestamp="2026-02-18 19:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:30.239407257 +0000 UTC m=+1092.689820053" watchObservedRunningTime="2026-02-18 19:36:30.241121321 +0000 UTC m=+1092.691534117" Feb 18 19:36:30 crc kubenswrapper[4754]: I0218 19:36:30.308513 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bsnzd"] Feb 18 19:36:30 crc kubenswrapper[4754]: I0218 19:36:30.317704 4754 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bsnzd"] Feb 18 19:36:31 crc kubenswrapper[4754]: I0218 19:36:31.161373 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3a1967ff-39ca-47f9-b25e-af150a2e567b","Type":"ContainerStarted","Data":"5d482546c3e0ceea197d3f1336987d6e2816578f365dae6d4d5736706f15562f"} Feb 18 19:36:31 crc kubenswrapper[4754]: I0218 19:36:31.161759 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 18 19:36:31 crc kubenswrapper[4754]: I0218 19:36:31.161775 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3a1967ff-39ca-47f9-b25e-af150a2e567b","Type":"ContainerStarted","Data":"6640008fffb4a5e7532c85ab69ea111f0b00baef3e3371de68de58cb1bb35ed0"} Feb 18 19:36:31 crc kubenswrapper[4754]: I0218 19:36:31.163475 4754 generic.go:334] "Generic (PLEG): container finished" podID="b36185b7-72d3-4f98-9928-e1c4c27594fa" containerID="5a02a86502889e66b82b41c6c17ab028c2bfb4975fdc645fbe52fcbc950c3263" exitCode=0 Feb 18 19:36:31 crc kubenswrapper[4754]: I0218 19:36:31.163550 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b36185b7-72d3-4f98-9928-e1c4c27594fa","Type":"ContainerDied","Data":"5a02a86502889e66b82b41c6c17ab028c2bfb4975fdc645fbe52fcbc950c3263"} Feb 18 19:36:31 crc kubenswrapper[4754]: I0218 19:36:31.167529 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c266c06-8bfc-47ba-bab9-6ef36d6294e5","Type":"ContainerStarted","Data":"8efa9ed2f8ec070336ff0001d8bd4208dbc88caaf6c105078f6dc7a9d1a19693"} Feb 18 19:36:31 crc kubenswrapper[4754]: I0218 19:36:31.195300 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.868831085 podStartE2EDuration="5.195273715s" podCreationTimestamp="2026-02-18 19:36:26 +0000 UTC" 
firstStartedPulling="2026-02-18 19:36:27.928038363 +0000 UTC m=+1090.378451159" lastFinishedPulling="2026-02-18 19:36:30.254480993 +0000 UTC m=+1092.704893789" observedRunningTime="2026-02-18 19:36:31.186322138 +0000 UTC m=+1093.636734935" watchObservedRunningTime="2026-02-18 19:36:31.195273715 +0000 UTC m=+1093.645686511" Feb 18 19:36:32 crc kubenswrapper[4754]: I0218 19:36:32.220683 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3a2e916-fb78-41e4-966b-c0c613144506" path="/var/lib/kubelet/pods/d3a2e916-fb78-41e4-966b-c0c613144506/volumes" Feb 18 19:36:32 crc kubenswrapper[4754]: I0218 19:36:32.226681 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 18 19:36:32 crc kubenswrapper[4754]: I0218 19:36:32.226751 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 18 19:36:33 crc kubenswrapper[4754]: I0218 19:36:33.187091 4754 generic.go:334] "Generic (PLEG): container finished" podID="27ef0b17-0896-4405-97ae-c4145e9d388c" containerID="bc016a87321fdef93118b52917cd46bef270d1998a77f31c2f7bf18362b6281d" exitCode=0 Feb 18 19:36:33 crc kubenswrapper[4754]: I0218 19:36:33.187208 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"27ef0b17-0896-4405-97ae-c4145e9d388c","Type":"ContainerDied","Data":"bc016a87321fdef93118b52917cd46bef270d1998a77f31c2f7bf18362b6281d"} Feb 18 19:36:34 crc kubenswrapper[4754]: I0218 19:36:34.201680 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"27ef0b17-0896-4405-97ae-c4145e9d388c","Type":"ContainerStarted","Data":"b75569c2b7383da8bfb547f51f58f94ea73e7040bedfffe14baa6ded13f7bca9"} Feb 18 19:36:34 crc kubenswrapper[4754]: I0218 19:36:34.226127 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" 
podStartSLOduration=-9223371983.62868 podStartE2EDuration="53.226096222s" podCreationTimestamp="2026-02-18 19:35:41 +0000 UTC" firstStartedPulling="2026-02-18 19:35:44.183407372 +0000 UTC m=+1046.633820168" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:34.223663927 +0000 UTC m=+1096.674076743" watchObservedRunningTime="2026-02-18 19:36:34.226096222 +0000 UTC m=+1096.676509018" Feb 18 19:36:35 crc kubenswrapper[4754]: I0218 19:36:35.675437 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.072530 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-5q4qt"] Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.072825 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" podUID="bbabf775-b706-47d3-988a-c8fb00e723d8" containerName="dnsmasq-dns" containerID="cri-o://da306bb50d0090c789dc3d7e63f4ee379d3f915c5450f9c5f8608a5ec772ab6a" gracePeriod=10 Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.074316 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.151346 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bq2qj"] Feb 18 19:36:36 crc kubenswrapper[4754]: E0218 19:36:36.151888 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a2e916-fb78-41e4-966b-c0c613144506" containerName="init" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.151910 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a2e916-fb78-41e4-966b-c0c613144506" containerName="init" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.152179 4754 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d3a2e916-fb78-41e4-966b-c0c613144506" containerName="init" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.153252 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.235646 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bq2qj"] Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.284400 4754 generic.go:334] "Generic (PLEG): container finished" podID="bbabf775-b706-47d3-988a-c8fb00e723d8" containerID="da306bb50d0090c789dc3d7e63f4ee379d3f915c5450f9c5f8608a5ec772ab6a" exitCode=0 Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.284473 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" event={"ID":"bbabf775-b706-47d3-988a-c8fb00e723d8","Type":"ContainerDied","Data":"da306bb50d0090c789dc3d7e63f4ee379d3f915c5450f9c5f8608a5ec772ab6a"} Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.294533 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-bq2qj\" (UID: \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.294641 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9lss\" (UniqueName: \"kubernetes.io/projected/fba4ef7f-c55c-44e4-8213-6a900266eb2f-kube-api-access-p9lss\") pod \"dnsmasq-dns-b8fbc5445-bq2qj\" (UID: \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.294705 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-config\") pod \"dnsmasq-dns-b8fbc5445-bq2qj\" (UID: \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.294771 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-bq2qj\" (UID: \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.294813 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-bq2qj\" (UID: \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.396499 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-bq2qj\" (UID: \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.396580 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9lss\" (UniqueName: \"kubernetes.io/projected/fba4ef7f-c55c-44e4-8213-6a900266eb2f-kube-api-access-p9lss\") pod \"dnsmasq-dns-b8fbc5445-bq2qj\" (UID: \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.396620 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-config\") pod \"dnsmasq-dns-b8fbc5445-bq2qj\" (UID: \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.396658 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-bq2qj\" (UID: \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.396696 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-bq2qj\" (UID: \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.398124 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-bq2qj\" (UID: \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.398173 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-bq2qj\" (UID: \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.398326 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-config\") pod \"dnsmasq-dns-b8fbc5445-bq2qj\" (UID: 
\"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.404975 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-bq2qj\" (UID: \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.445995 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9lss\" (UniqueName: \"kubernetes.io/projected/fba4ef7f-c55c-44e4-8213-6a900266eb2f-kube-api-access-p9lss\") pod \"dnsmasq-dns-b8fbc5445-bq2qj\" (UID: \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.535898 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.631439 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" podUID="bbabf775-b706-47d3-988a-c8fb00e723d8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Feb 18 19:36:36 crc kubenswrapper[4754]: I0218 19:36:36.813453 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.245401 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.257661 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.261303 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-hcq4l" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.261451 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.261540 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.266514 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.266550 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.418627 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8ecc731f-ea98-4469-be08-1a12088339b5-cache\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.418999 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.419085 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8ecc731f-ea98-4469-be08-1a12088339b5-lock\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:36:37 crc kubenswrapper[4754]: 
I0218 19:36:37.419122 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.419165 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ecc731f-ea98-4469-be08-1a12088339b5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.419206 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8xsz\" (UniqueName: \"kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-kube-api-access-b8xsz\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.521181 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8xsz\" (UniqueName: \"kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-kube-api-access-b8xsz\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.521273 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8ecc731f-ea98-4469-be08-1a12088339b5-cache\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.521315 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.521384 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8ecc731f-ea98-4469-be08-1a12088339b5-lock\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.521417 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.521434 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ecc731f-ea98-4469-be08-1a12088339b5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:36:37 crc kubenswrapper[4754]: E0218 19:36:37.521631 4754 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:36:37 crc kubenswrapper[4754]: E0218 19:36:37.521656 4754 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:36:37 crc kubenswrapper[4754]: E0218 19:36:37.521729 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift podName:8ecc731f-ea98-4469-be08-1a12088339b5 nodeName:}" failed. 
No retries permitted until 2026-02-18 19:36:38.021701681 +0000 UTC m=+1100.472114477 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift") pod "swift-storage-0" (UID: "8ecc731f-ea98-4469-be08-1a12088339b5") : configmap "swift-ring-files" not found Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.521973 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8ecc731f-ea98-4469-be08-1a12088339b5-cache\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.522056 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.522097 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8ecc731f-ea98-4469-be08-1a12088339b5-lock\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.525833 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ecc731f-ea98-4469-be08-1a12088339b5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.542779 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8xsz\" (UniqueName: 
\"kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-kube-api-access-b8xsz\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.552346 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.914132 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-k6j7c"] Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.917315 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.935887 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.935898 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.936202 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 18 19:36:37 crc kubenswrapper[4754]: I0218 19:36:37.994080 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-k6j7c"] Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.002844 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-8lf58"] Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.004688 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.020258 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-8lf58"] Feb 18 19:36:38 crc kubenswrapper[4754]: E0218 19:36:38.026820 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-lx2zz ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-k6j7c" podUID="e1e5dca3-6296-4f45-8c5c-6f65981b2323" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.029973 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-k6j7c"] Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.038839 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e1e5dca3-6296-4f45-8c5c-6f65981b2323-swiftconf\") pod \"swift-ring-rebalance-k6j7c\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.038907 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1e5dca3-6296-4f45-8c5c-6f65981b2323-scripts\") pod \"swift-ring-rebalance-k6j7c\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.038940 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e1e5dca3-6296-4f45-8c5c-6f65981b2323-dispersionconf\") pod \"swift-ring-rebalance-k6j7c\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 
crc kubenswrapper[4754]: I0218 19:36:38.038961 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e1e5dca3-6296-4f45-8c5c-6f65981b2323-ring-data-devices\") pod \"swift-ring-rebalance-k6j7c\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.039012 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e5dca3-6296-4f45-8c5c-6f65981b2323-combined-ca-bundle\") pod \"swift-ring-rebalance-k6j7c\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.039257 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e1e5dca3-6296-4f45-8c5c-6f65981b2323-etc-swift\") pod \"swift-ring-rebalance-k6j7c\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.039538 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx2zz\" (UniqueName: \"kubernetes.io/projected/e1e5dca3-6296-4f45-8c5c-6f65981b2323-kube-api-access-lx2zz\") pod \"swift-ring-rebalance-k6j7c\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.039623 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:36:38 crc 
kubenswrapper[4754]: E0218 19:36:38.039819 4754 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:36:38 crc kubenswrapper[4754]: E0218 19:36:38.039840 4754 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:36:38 crc kubenswrapper[4754]: E0218 19:36:38.039906 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift podName:8ecc731f-ea98-4469-be08-1a12088339b5 nodeName:}" failed. No retries permitted until 2026-02-18 19:36:39.039882233 +0000 UTC m=+1101.490295029 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift") pod "swift-storage-0" (UID: "8ecc731f-ea98-4469-be08-1a12088339b5") : configmap "swift-ring-files" not found Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.097133 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.097253 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.194190 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e1e5dca3-6296-4f45-8c5c-6f65981b2323-etc-swift\") pod 
\"swift-ring-rebalance-k6j7c\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.194291 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2zz\" (UniqueName: \"kubernetes.io/projected/e1e5dca3-6296-4f45-8c5c-6f65981b2323-kube-api-access-lx2zz\") pod \"swift-ring-rebalance-k6j7c\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.194358 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/da8323ff-9b88-4d7f-b400-3425c16e92d0-ring-data-devices\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.194390 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/da8323ff-9b88-4d7f-b400-3425c16e92d0-swiftconf\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.194417 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da8323ff-9b88-4d7f-b400-3425c16e92d0-scripts\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.194449 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e1e5dca3-6296-4f45-8c5c-6f65981b2323-swiftconf\") pod \"swift-ring-rebalance-k6j7c\" (UID: 
\"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.194489 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1e5dca3-6296-4f45-8c5c-6f65981b2323-scripts\") pod \"swift-ring-rebalance-k6j7c\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.194522 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8323ff-9b88-4d7f-b400-3425c16e92d0-combined-ca-bundle\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.194551 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvs6w\" (UniqueName: \"kubernetes.io/projected/da8323ff-9b88-4d7f-b400-3425c16e92d0-kube-api-access-wvs6w\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.194580 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e1e5dca3-6296-4f45-8c5c-6f65981b2323-dispersionconf\") pod \"swift-ring-rebalance-k6j7c\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.194613 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e1e5dca3-6296-4f45-8c5c-6f65981b2323-ring-data-devices\") pod \"swift-ring-rebalance-k6j7c\" (UID: 
\"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.194656 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/da8323ff-9b88-4d7f-b400-3425c16e92d0-etc-swift\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.194686 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/da8323ff-9b88-4d7f-b400-3425c16e92d0-dispersionconf\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.194719 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e5dca3-6296-4f45-8c5c-6f65981b2323-combined-ca-bundle\") pod \"swift-ring-rebalance-k6j7c\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.196868 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1e5dca3-6296-4f45-8c5c-6f65981b2323-scripts\") pod \"swift-ring-rebalance-k6j7c\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.197157 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e1e5dca3-6296-4f45-8c5c-6f65981b2323-etc-swift\") pod \"swift-ring-rebalance-k6j7c\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " 
pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.199416 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e5dca3-6296-4f45-8c5c-6f65981b2323-combined-ca-bundle\") pod \"swift-ring-rebalance-k6j7c\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.201502 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e1e5dca3-6296-4f45-8c5c-6f65981b2323-swiftconf\") pod \"swift-ring-rebalance-k6j7c\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.202037 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e1e5dca3-6296-4f45-8c5c-6f65981b2323-ring-data-devices\") pod \"swift-ring-rebalance-k6j7c\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.204822 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e1e5dca3-6296-4f45-8c5c-6f65981b2323-dispersionconf\") pod \"swift-ring-rebalance-k6j7c\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.228059 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx2zz\" (UniqueName: \"kubernetes.io/projected/e1e5dca3-6296-4f45-8c5c-6f65981b2323-kube-api-access-lx2zz\") pod \"swift-ring-rebalance-k6j7c\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.296587 
4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/da8323ff-9b88-4d7f-b400-3425c16e92d0-ring-data-devices\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.296643 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/da8323ff-9b88-4d7f-b400-3425c16e92d0-swiftconf\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.296664 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da8323ff-9b88-4d7f-b400-3425c16e92d0-scripts\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.296698 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8323ff-9b88-4d7f-b400-3425c16e92d0-combined-ca-bundle\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.296720 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvs6w\" (UniqueName: \"kubernetes.io/projected/da8323ff-9b88-4d7f-b400-3425c16e92d0-kube-api-access-wvs6w\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.296758 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/da8323ff-9b88-4d7f-b400-3425c16e92d0-etc-swift\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.296785 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/da8323ff-9b88-4d7f-b400-3425c16e92d0-dispersionconf\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.300446 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/da8323ff-9b88-4d7f-b400-3425c16e92d0-dispersionconf\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.301710 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/da8323ff-9b88-4d7f-b400-3425c16e92d0-ring-data-devices\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.305214 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/da8323ff-9b88-4d7f-b400-3425c16e92d0-swiftconf\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.305672 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da8323ff-9b88-4d7f-b400-3425c16e92d0-scripts\") pod 
\"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.307955 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8323ff-9b88-4d7f-b400-3425c16e92d0-combined-ca-bundle\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.308572 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/da8323ff-9b88-4d7f-b400-3425c16e92d0-etc-swift\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.367442 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvs6w\" (UniqueName: \"kubernetes.io/projected/da8323ff-9b88-4d7f-b400-3425c16e92d0-kube-api-access-wvs6w\") pod \"swift-ring-rebalance-8lf58\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.579413 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.593362 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.614526 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.705799 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e1e5dca3-6296-4f45-8c5c-6f65981b2323-swiftconf\") pod \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.706056 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e1e5dca3-6296-4f45-8c5c-6f65981b2323-ring-data-devices\") pod \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.706228 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx2zz\" (UniqueName: \"kubernetes.io/projected/e1e5dca3-6296-4f45-8c5c-6f65981b2323-kube-api-access-lx2zz\") pod \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.706257 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e1e5dca3-6296-4f45-8c5c-6f65981b2323-etc-swift\") pod \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.706369 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e5dca3-6296-4f45-8c5c-6f65981b2323-combined-ca-bundle\") pod \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.706495 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e1e5dca3-6296-4f45-8c5c-6f65981b2323-dispersionconf\") pod \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.706606 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1e5dca3-6296-4f45-8c5c-6f65981b2323-scripts\") pod \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\" (UID: \"e1e5dca3-6296-4f45-8c5c-6f65981b2323\") " Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.706682 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1e5dca3-6296-4f45-8c5c-6f65981b2323-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e1e5dca3-6296-4f45-8c5c-6f65981b2323" (UID: "e1e5dca3-6296-4f45-8c5c-6f65981b2323"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.706723 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e5dca3-6296-4f45-8c5c-6f65981b2323-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "e1e5dca3-6296-4f45-8c5c-6f65981b2323" (UID: "e1e5dca3-6296-4f45-8c5c-6f65981b2323"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.707169 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e5dca3-6296-4f45-8c5c-6f65981b2323-scripts" (OuterVolumeSpecName: "scripts") pod "e1e5dca3-6296-4f45-8c5c-6f65981b2323" (UID: "e1e5dca3-6296-4f45-8c5c-6f65981b2323"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.707638 4754 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e1e5dca3-6296-4f45-8c5c-6f65981b2323-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.707654 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1e5dca3-6296-4f45-8c5c-6f65981b2323-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.707665 4754 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e1e5dca3-6296-4f45-8c5c-6f65981b2323-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.711375 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e5dca3-6296-4f45-8c5c-6f65981b2323-kube-api-access-lx2zz" (OuterVolumeSpecName: "kube-api-access-lx2zz") pod "e1e5dca3-6296-4f45-8c5c-6f65981b2323" (UID: "e1e5dca3-6296-4f45-8c5c-6f65981b2323"). InnerVolumeSpecName "kube-api-access-lx2zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.715378 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e5dca3-6296-4f45-8c5c-6f65981b2323-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "e1e5dca3-6296-4f45-8c5c-6f65981b2323" (UID: "e1e5dca3-6296-4f45-8c5c-6f65981b2323"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.731555 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e5dca3-6296-4f45-8c5c-6f65981b2323-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1e5dca3-6296-4f45-8c5c-6f65981b2323" (UID: "e1e5dca3-6296-4f45-8c5c-6f65981b2323"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.732078 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e5dca3-6296-4f45-8c5c-6f65981b2323-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "e1e5dca3-6296-4f45-8c5c-6f65981b2323" (UID: "e1e5dca3-6296-4f45-8c5c-6f65981b2323"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.810064 4754 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e1e5dca3-6296-4f45-8c5c-6f65981b2323-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.810103 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx2zz\" (UniqueName: \"kubernetes.io/projected/e1e5dca3-6296-4f45-8c5c-6f65981b2323-kube-api-access-lx2zz\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.810118 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e5dca3-6296-4f45-8c5c-6f65981b2323-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:38 crc kubenswrapper[4754]: I0218 19:36:38.810130 4754 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e1e5dca3-6296-4f45-8c5c-6f65981b2323-dispersionconf\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:36:39 crc kubenswrapper[4754]: I0218 19:36:39.116857 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:36:39 crc kubenswrapper[4754]: E0218 19:36:39.117234 4754 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:36:39 crc kubenswrapper[4754]: E0218 19:36:39.117282 4754 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:36:39 crc kubenswrapper[4754]: E0218 19:36:39.117372 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift podName:8ecc731f-ea98-4469-be08-1a12088339b5 nodeName:}" failed. No retries permitted until 2026-02-18 19:36:41.117344274 +0000 UTC m=+1103.567757060 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift") pod "swift-storage-0" (UID: "8ecc731f-ea98-4469-be08-1a12088339b5") : configmap "swift-ring-files" not found Feb 18 19:36:39 crc kubenswrapper[4754]: I0218 19:36:39.602412 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-k6j7c" Feb 18 19:36:39 crc kubenswrapper[4754]: I0218 19:36:39.657215 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-k6j7c"] Feb 18 19:36:39 crc kubenswrapper[4754]: I0218 19:36:39.665111 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-k6j7c"] Feb 18 19:36:40 crc kubenswrapper[4754]: I0218 19:36:40.221127 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1e5dca3-6296-4f45-8c5c-6f65981b2323" path="/var/lib/kubelet/pods/e1e5dca3-6296-4f45-8c5c-6f65981b2323/volumes" Feb 18 19:36:40 crc kubenswrapper[4754]: I0218 19:36:40.875834 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-8lf58"] Feb 18 19:36:40 crc kubenswrapper[4754]: I0218 19:36:40.883533 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bq2qj"] Feb 18 19:36:40 crc kubenswrapper[4754]: W0218 19:36:40.928106 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda8323ff_9b88_4d7f_b400_3425c16e92d0.slice/crio-efbc54918d6cb58bff581f83e617089c9667d824d9b9d685ea2c07bc955c0ddd WatchSource:0}: Error finding container efbc54918d6cb58bff581f83e617089c9667d824d9b9d685ea2c07bc955c0ddd: Status 404 returned error can't find the container with id efbc54918d6cb58bff581f83e617089c9667d824d9b9d685ea2c07bc955c0ddd Feb 18 19:36:40 crc kubenswrapper[4754]: W0218 19:36:40.928930 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfba4ef7f_c55c_44e4_8213_6a900266eb2f.slice/crio-d5ad1f872c18317c8c0628ff7b85ca58db72a18e449d0fe39ea041513aa3eaca WatchSource:0}: Error finding container d5ad1f872c18317c8c0628ff7b85ca58db72a18e449d0fe39ea041513aa3eaca: Status 404 returned error can't find the container with id 
d5ad1f872c18317c8c0628ff7b85ca58db72a18e449d0fe39ea041513aa3eaca Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.081934 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.159024 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbabf775-b706-47d3-988a-c8fb00e723d8-config\") pod \"bbabf775-b706-47d3-988a-c8fb00e723d8\" (UID: \"bbabf775-b706-47d3-988a-c8fb00e723d8\") " Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.159220 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbabf775-b706-47d3-988a-c8fb00e723d8-dns-svc\") pod \"bbabf775-b706-47d3-988a-c8fb00e723d8\" (UID: \"bbabf775-b706-47d3-988a-c8fb00e723d8\") " Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.159356 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcf4c\" (UniqueName: \"kubernetes.io/projected/bbabf775-b706-47d3-988a-c8fb00e723d8-kube-api-access-pcf4c\") pod \"bbabf775-b706-47d3-988a-c8fb00e723d8\" (UID: \"bbabf775-b706-47d3-988a-c8fb00e723d8\") " Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.159439 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbabf775-b706-47d3-988a-c8fb00e723d8-ovsdbserver-nb\") pod \"bbabf775-b706-47d3-988a-c8fb00e723d8\" (UID: \"bbabf775-b706-47d3-988a-c8fb00e723d8\") " Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.159971 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " 
pod="openstack/swift-storage-0" Feb 18 19:36:41 crc kubenswrapper[4754]: E0218 19:36:41.160316 4754 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:36:41 crc kubenswrapper[4754]: E0218 19:36:41.160349 4754 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:36:41 crc kubenswrapper[4754]: E0218 19:36:41.160438 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift podName:8ecc731f-ea98-4469-be08-1a12088339b5 nodeName:}" failed. No retries permitted until 2026-02-18 19:36:45.160411001 +0000 UTC m=+1107.610823797 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift") pod "swift-storage-0" (UID: "8ecc731f-ea98-4469-be08-1a12088339b5") : configmap "swift-ring-files" not found Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.164428 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbabf775-b706-47d3-988a-c8fb00e723d8-kube-api-access-pcf4c" (OuterVolumeSpecName: "kube-api-access-pcf4c") pod "bbabf775-b706-47d3-988a-c8fb00e723d8" (UID: "bbabf775-b706-47d3-988a-c8fb00e723d8"). InnerVolumeSpecName "kube-api-access-pcf4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.202599 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbabf775-b706-47d3-988a-c8fb00e723d8-config" (OuterVolumeSpecName: "config") pod "bbabf775-b706-47d3-988a-c8fb00e723d8" (UID: "bbabf775-b706-47d3-988a-c8fb00e723d8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.211121 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbabf775-b706-47d3-988a-c8fb00e723d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bbabf775-b706-47d3-988a-c8fb00e723d8" (UID: "bbabf775-b706-47d3-988a-c8fb00e723d8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.213712 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbabf775-b706-47d3-988a-c8fb00e723d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bbabf775-b706-47d3-988a-c8fb00e723d8" (UID: "bbabf775-b706-47d3-988a-c8fb00e723d8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.262075 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbabf775-b706-47d3-988a-c8fb00e723d8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.262110 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcf4c\" (UniqueName: \"kubernetes.io/projected/bbabf775-b706-47d3-988a-c8fb00e723d8-kube-api-access-pcf4c\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.262122 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbabf775-b706-47d3-988a-c8fb00e723d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.262134 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbabf775-b706-47d3-988a-c8fb00e723d8-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:41 crc 
kubenswrapper[4754]: I0218 19:36:41.623344 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt" event={"ID":"bbabf775-b706-47d3-988a-c8fb00e723d8","Type":"ContainerDied","Data":"9e6a632a8f166e17a30312c3b5e65c6a72ef7039181f08095b026838079e68c8"}
Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.623427 4754 scope.go:117] "RemoveContainer" containerID="da306bb50d0090c789dc3d7e63f4ee379d3f915c5450f9c5f8608a5ec772ab6a"
Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.623386 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-5q4qt"
Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.626461 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" event={"ID":"fba4ef7f-c55c-44e4-8213-6a900266eb2f","Type":"ContainerStarted","Data":"109c4394d1987aa937840dd202ea7773008c95aca9332e896bee657bd553b7b0"}
Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.626514 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" event={"ID":"fba4ef7f-c55c-44e4-8213-6a900266eb2f","Type":"ContainerStarted","Data":"d5ad1f872c18317c8c0628ff7b85ca58db72a18e449d0fe39ea041513aa3eaca"}
Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.628088 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8lf58" event={"ID":"da8323ff-9b88-4d7f-b400-3425c16e92d0","Type":"ContainerStarted","Data":"efbc54918d6cb58bff581f83e617089c9667d824d9b9d685ea2c07bc955c0ddd"}
Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.654629 4754 scope.go:117] "RemoveContainer" containerID="bacc4c82f2f29acfd8baa51ea777c97250ee931c30e089cb662fa3041847538a"
Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.696965 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-5q4qt"]
Feb 18 19:36:41 crc kubenswrapper[4754]: I0218 19:36:41.705200 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-5q4qt"]
Feb 18 19:36:42 crc kubenswrapper[4754]: I0218 19:36:42.221511 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbabf775-b706-47d3-988a-c8fb00e723d8" path="/var/lib/kubelet/pods/bbabf775-b706-47d3-988a-c8fb00e723d8/volumes"
Feb 18 19:36:42 crc kubenswrapper[4754]: I0218 19:36:42.641946 4754 generic.go:334] "Generic (PLEG): container finished" podID="fba4ef7f-c55c-44e4-8213-6a900266eb2f" containerID="109c4394d1987aa937840dd202ea7773008c95aca9332e896bee657bd553b7b0" exitCode=0
Feb 18 19:36:42 crc kubenswrapper[4754]: I0218 19:36:42.642014 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" event={"ID":"fba4ef7f-c55c-44e4-8213-6a900266eb2f","Type":"ContainerDied","Data":"109c4394d1987aa937840dd202ea7773008c95aca9332e896bee657bd553b7b0"}
Feb 18 19:36:43 crc kubenswrapper[4754]: I0218 19:36:43.442559 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 18 19:36:43 crc kubenswrapper[4754]: I0218 19:36:43.443011 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 18 19:36:43 crc kubenswrapper[4754]: I0218 19:36:43.655765 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" event={"ID":"fba4ef7f-c55c-44e4-8213-6a900266eb2f","Type":"ContainerStarted","Data":"77c1e757de7b8d03f44bccf28e28400b6dacf6fd2115993599d8dd41011789e1"}
Feb 18 19:36:44 crc kubenswrapper[4754]: I0218 19:36:44.664019 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj"
Feb 18 19:36:44 crc kubenswrapper[4754]: I0218 19:36:44.694234 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" podStartSLOduration=8.69420775 podStartE2EDuration="8.69420775s" podCreationTimestamp="2026-02-18 19:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:44.687208523 +0000 UTC m=+1107.137621329" watchObservedRunningTime="2026-02-18 19:36:44.69420775 +0000 UTC m=+1107.144620546"
Feb 18 19:36:45 crc kubenswrapper[4754]: I0218 19:36:45.245279 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0"
Feb 18 19:36:45 crc kubenswrapper[4754]: E0218 19:36:45.245441 4754 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 18 19:36:45 crc kubenswrapper[4754]: E0218 19:36:45.245471 4754 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 18 19:36:45 crc kubenswrapper[4754]: E0218 19:36:45.245528 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift podName:8ecc731f-ea98-4469-be08-1a12088339b5 nodeName:}" failed. No retries permitted until 2026-02-18 19:36:53.245507033 +0000 UTC m=+1115.695919829 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift") pod "swift-storage-0" (UID: "8ecc731f-ea98-4469-be08-1a12088339b5") : configmap "swift-ring-files" not found
Feb 18 19:36:45 crc kubenswrapper[4754]: I0218 19:36:45.561871 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 18 19:36:45 crc kubenswrapper[4754]: I0218 19:36:45.651796 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 18 19:36:47 crc kubenswrapper[4754]: I0218 19:36:47.017520 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Feb 18 19:36:48 crc kubenswrapper[4754]: I0218 19:36:48.863112 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-snqpk"]
Feb 18 19:36:48 crc kubenswrapper[4754]: E0218 19:36:48.869718 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbabf775-b706-47d3-988a-c8fb00e723d8" containerName="init"
Feb 18 19:36:48 crc kubenswrapper[4754]: I0218 19:36:48.869745 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbabf775-b706-47d3-988a-c8fb00e723d8" containerName="init"
Feb 18 19:36:48 crc kubenswrapper[4754]: E0218 19:36:48.869783 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbabf775-b706-47d3-988a-c8fb00e723d8" containerName="dnsmasq-dns"
Feb 18 19:36:48 crc kubenswrapper[4754]: I0218 19:36:48.869791 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbabf775-b706-47d3-988a-c8fb00e723d8" containerName="dnsmasq-dns"
Feb 18 19:36:48 crc kubenswrapper[4754]: I0218 19:36:48.870048 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbabf775-b706-47d3-988a-c8fb00e723d8" containerName="dnsmasq-dns"
Feb 18 19:36:48 crc kubenswrapper[4754]: I0218 19:36:48.870858 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-snqpk"
Feb 18 19:36:48 crc kubenswrapper[4754]: I0218 19:36:48.876492 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9bc3-account-create-update-hrjdd"]
Feb 18 19:36:48 crc kubenswrapper[4754]: I0218 19:36:48.877828 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9bc3-account-create-update-hrjdd"
Feb 18 19:36:48 crc kubenswrapper[4754]: I0218 19:36:48.880889 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 18 19:36:48 crc kubenswrapper[4754]: I0218 19:36:48.884914 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-snqpk"]
Feb 18 19:36:48 crc kubenswrapper[4754]: I0218 19:36:48.926182 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9bc3-account-create-update-hrjdd"]
Feb 18 19:36:49 crc kubenswrapper[4754]: I0218 19:36:49.028056 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5616676f-6b15-4f24-aa3f-5d88ad180239-operator-scripts\") pod \"glance-9bc3-account-create-update-hrjdd\" (UID: \"5616676f-6b15-4f24-aa3f-5d88ad180239\") " pod="openstack/glance-9bc3-account-create-update-hrjdd"
Feb 18 19:36:49 crc kubenswrapper[4754]: I0218 19:36:49.028254 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gwfm\" (UniqueName: \"kubernetes.io/projected/dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4-kube-api-access-2gwfm\") pod \"glance-db-create-snqpk\" (UID: \"dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4\") " pod="openstack/glance-db-create-snqpk"
Feb 18 19:36:49 crc kubenswrapper[4754]: I0218 19:36:49.028683 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5ntx\" (UniqueName: \"kubernetes.io/projected/5616676f-6b15-4f24-aa3f-5d88ad180239-kube-api-access-x5ntx\") pod \"glance-9bc3-account-create-update-hrjdd\" (UID: \"5616676f-6b15-4f24-aa3f-5d88ad180239\") " pod="openstack/glance-9bc3-account-create-update-hrjdd"
Feb 18 19:36:49 crc kubenswrapper[4754]: I0218 19:36:49.028829 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4-operator-scripts\") pod \"glance-db-create-snqpk\" (UID: \"dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4\") " pod="openstack/glance-db-create-snqpk"
Feb 18 19:36:49 crc kubenswrapper[4754]: I0218 19:36:49.130498 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5616676f-6b15-4f24-aa3f-5d88ad180239-operator-scripts\") pod \"glance-9bc3-account-create-update-hrjdd\" (UID: \"5616676f-6b15-4f24-aa3f-5d88ad180239\") " pod="openstack/glance-9bc3-account-create-update-hrjdd"
Feb 18 19:36:49 crc kubenswrapper[4754]: I0218 19:36:49.130582 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gwfm\" (UniqueName: \"kubernetes.io/projected/dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4-kube-api-access-2gwfm\") pod \"glance-db-create-snqpk\" (UID: \"dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4\") " pod="openstack/glance-db-create-snqpk"
Feb 18 19:36:49 crc kubenswrapper[4754]: I0218 19:36:49.130665 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5ntx\" (UniqueName: \"kubernetes.io/projected/5616676f-6b15-4f24-aa3f-5d88ad180239-kube-api-access-x5ntx\") pod \"glance-9bc3-account-create-update-hrjdd\" (UID: \"5616676f-6b15-4f24-aa3f-5d88ad180239\") " pod="openstack/glance-9bc3-account-create-update-hrjdd"
Feb 18 19:36:49 crc kubenswrapper[4754]: I0218 19:36:49.130705 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4-operator-scripts\") pod \"glance-db-create-snqpk\" (UID: \"dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4\") " pod="openstack/glance-db-create-snqpk"
Feb 18 19:36:49 crc kubenswrapper[4754]: I0218 19:36:49.132969 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4-operator-scripts\") pod \"glance-db-create-snqpk\" (UID: \"dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4\") " pod="openstack/glance-db-create-snqpk"
Feb 18 19:36:49 crc kubenswrapper[4754]: I0218 19:36:49.133546 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5616676f-6b15-4f24-aa3f-5d88ad180239-operator-scripts\") pod \"glance-9bc3-account-create-update-hrjdd\" (UID: \"5616676f-6b15-4f24-aa3f-5d88ad180239\") " pod="openstack/glance-9bc3-account-create-update-hrjdd"
Feb 18 19:36:49 crc kubenswrapper[4754]: I0218 19:36:49.154606 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gwfm\" (UniqueName: \"kubernetes.io/projected/dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4-kube-api-access-2gwfm\") pod \"glance-db-create-snqpk\" (UID: \"dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4\") " pod="openstack/glance-db-create-snqpk"
Feb 18 19:36:49 crc kubenswrapper[4754]: I0218 19:36:49.154617 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5ntx\" (UniqueName: \"kubernetes.io/projected/5616676f-6b15-4f24-aa3f-5d88ad180239-kube-api-access-x5ntx\") pod \"glance-9bc3-account-create-update-hrjdd\" (UID: \"5616676f-6b15-4f24-aa3f-5d88ad180239\") " pod="openstack/glance-9bc3-account-create-update-hrjdd"
Feb 18 19:36:49 crc kubenswrapper[4754]: I0218 19:36:49.203004 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-snqpk"
Feb 18 19:36:49 crc kubenswrapper[4754]: I0218 19:36:49.214724 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9bc3-account-create-update-hrjdd"
Feb 18 19:36:50 crc kubenswrapper[4754]: I0218 19:36:50.229582 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9bcwf" podUID="c8cdad93-899c-45df-aba0-c680a947f021" containerName="ovn-controller" probeResult="failure" output=<
Feb 18 19:36:50 crc kubenswrapper[4754]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 18 19:36:50 crc kubenswrapper[4754]: >
Feb 18 19:36:50 crc kubenswrapper[4754]: I0218 19:36:50.843722 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ltst9"]
Feb 18 19:36:50 crc kubenswrapper[4754]: I0218 19:36:50.845872 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ltst9"
Feb 18 19:36:50 crc kubenswrapper[4754]: I0218 19:36:50.850035 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 18 19:36:50 crc kubenswrapper[4754]: I0218 19:36:50.856915 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ltst9"]
Feb 18 19:36:50 crc kubenswrapper[4754]: I0218 19:36:50.982707 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqsj9\" (UniqueName: \"kubernetes.io/projected/fb85f580-fcb7-43ef-ac52-078b28e014f7-kube-api-access-mqsj9\") pod \"root-account-create-update-ltst9\" (UID: \"fb85f580-fcb7-43ef-ac52-078b28e014f7\") " pod="openstack/root-account-create-update-ltst9"
Feb 18 19:36:50 crc kubenswrapper[4754]: I0218 19:36:50.983229 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb85f580-fcb7-43ef-ac52-078b28e014f7-operator-scripts\") pod \"root-account-create-update-ltst9\" (UID: \"fb85f580-fcb7-43ef-ac52-078b28e014f7\") " pod="openstack/root-account-create-update-ltst9"
Feb 18 19:36:51 crc kubenswrapper[4754]: I0218 19:36:51.084697 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqsj9\" (UniqueName: \"kubernetes.io/projected/fb85f580-fcb7-43ef-ac52-078b28e014f7-kube-api-access-mqsj9\") pod \"root-account-create-update-ltst9\" (UID: \"fb85f580-fcb7-43ef-ac52-078b28e014f7\") " pod="openstack/root-account-create-update-ltst9"
Feb 18 19:36:51 crc kubenswrapper[4754]: I0218 19:36:51.084816 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb85f580-fcb7-43ef-ac52-078b28e014f7-operator-scripts\") pod \"root-account-create-update-ltst9\" (UID: \"fb85f580-fcb7-43ef-ac52-078b28e014f7\") " pod="openstack/root-account-create-update-ltst9"
Feb 18 19:36:51 crc kubenswrapper[4754]: I0218 19:36:51.085710 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb85f580-fcb7-43ef-ac52-078b28e014f7-operator-scripts\") pod \"root-account-create-update-ltst9\" (UID: \"fb85f580-fcb7-43ef-ac52-078b28e014f7\") " pod="openstack/root-account-create-update-ltst9"
Feb 18 19:36:51 crc kubenswrapper[4754]: I0218 19:36:51.108729 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqsj9\" (UniqueName: \"kubernetes.io/projected/fb85f580-fcb7-43ef-ac52-078b28e014f7-kube-api-access-mqsj9\") pod \"root-account-create-update-ltst9\" (UID: \"fb85f580-fcb7-43ef-ac52-078b28e014f7\") " pod="openstack/root-account-create-update-ltst9"
Feb 18 19:36:51 crc kubenswrapper[4754]: I0218 19:36:51.179765 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9bc3-account-create-update-hrjdd"]
Feb 18 19:36:51 crc kubenswrapper[4754]: I0218 19:36:51.182318 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ltst9"
Feb 18 19:36:51 crc kubenswrapper[4754]: I0218 19:36:51.183372 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 18 19:36:51 crc kubenswrapper[4754]: I0218 19:36:51.237027 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-snqpk"]
Feb 18 19:36:51 crc kubenswrapper[4754]: I0218 19:36:51.274357 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 18 19:36:51 crc kubenswrapper[4754]: I0218 19:36:51.537404 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj"
Feb 18 19:36:51 crc kubenswrapper[4754]: I0218 19:36:51.598892 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-t9npd"]
Feb 18 19:36:51 crc kubenswrapper[4754]: I0218 19:36:51.599638 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-t9npd" podUID="153dc5a5-f545-4be6-b170-2e0f5b64939e" containerName="dnsmasq-dns" containerID="cri-o://5d6f0e87fb0a12d361c39278d29c306922378ef236dd580ef52ee9168d2ddccc" gracePeriod=10
Feb 18 19:36:51 crc kubenswrapper[4754]: I0218 19:36:51.812896 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-t9npd" podUID="153dc5a5-f545-4be6-b170-2e0f5b64939e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.117:5353: connect: connection refused"
Feb 18 19:36:52 crc kubenswrapper[4754]: I0218 19:36:52.759245 4754 generic.go:334] "Generic (PLEG): container finished" podID="153dc5a5-f545-4be6-b170-2e0f5b64939e" containerID="5d6f0e87fb0a12d361c39278d29c306922378ef236dd580ef52ee9168d2ddccc" exitCode=0
Feb 18 19:36:52 crc kubenswrapper[4754]: I0218 19:36:52.759305 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-t9npd" event={"ID":"153dc5a5-f545-4be6-b170-2e0f5b64939e","Type":"ContainerDied","Data":"5d6f0e87fb0a12d361c39278d29c306922378ef236dd580ef52ee9168d2ddccc"}
Feb 18 19:36:53 crc kubenswrapper[4754]: I0218 19:36:53.335176 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0"
Feb 18 19:36:53 crc kubenswrapper[4754]: E0218 19:36:53.335532 4754 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 18 19:36:53 crc kubenswrapper[4754]: E0218 19:36:53.335564 4754 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 18 19:36:53 crc kubenswrapper[4754]: E0218 19:36:53.335626 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift podName:8ecc731f-ea98-4469-be08-1a12088339b5 nodeName:}" failed. No retries permitted until 2026-02-18 19:37:09.335609644 +0000 UTC m=+1131.786022440 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift") pod "swift-storage-0" (UID: "8ecc731f-ea98-4469-be08-1a12088339b5") : configmap "swift-ring-files" not found
Feb 18 19:36:53 crc kubenswrapper[4754]: W0218 19:36:53.753013 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcdb2f7d_6cc1_4e32_a98e_9f1c1ce03ef4.slice/crio-6f8fe606f3d087fd79b4416d9517fa231ff3b5a66406cf0ec65184b38048d077 WatchSource:0}: Error finding container 6f8fe606f3d087fd79b4416d9517fa231ff3b5a66406cf0ec65184b38048d077: Status 404 returned error can't find the container with id 6f8fe606f3d087fd79b4416d9517fa231ff3b5a66406cf0ec65184b38048d077
Feb 18 19:36:53 crc kubenswrapper[4754]: I0218 19:36:53.776279 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-snqpk" event={"ID":"dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4","Type":"ContainerStarted","Data":"6f8fe606f3d087fd79b4416d9517fa231ff3b5a66406cf0ec65184b38048d077"}
Feb 18 19:36:53 crc kubenswrapper[4754]: I0218 19:36:53.778066 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9bc3-account-create-update-hrjdd" event={"ID":"5616676f-6b15-4f24-aa3f-5d88ad180239","Type":"ContainerStarted","Data":"35264f947c12bc1132996cb32de1c694362d2429cb639a27a143f4a65c745715"}
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.385767 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-dqt2f"]
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.387535 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dqt2f"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.409198 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8883-account-create-update-zdkht"]
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.414049 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8883-account-create-update-zdkht"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.416857 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.423203 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dqt2f"]
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.441986 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ltst9"]
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.457225 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8883-account-create-update-zdkht"]
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.466122 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37bb31ad-8178-4924-ac57-6b8325e3cafa-operator-scripts\") pod \"keystone-db-create-dqt2f\" (UID: \"37bb31ad-8178-4924-ac57-6b8325e3cafa\") " pod="openstack/keystone-db-create-dqt2f"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.466339 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmn6d\" (UniqueName: \"kubernetes.io/projected/dedaac0d-ebad-497d-9b8c-b6ee470782f2-kube-api-access-jmn6d\") pod \"keystone-8883-account-create-update-zdkht\" (UID: \"dedaac0d-ebad-497d-9b8c-b6ee470782f2\") " pod="openstack/keystone-8883-account-create-update-zdkht"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.466406 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2p6x\" (UniqueName: \"kubernetes.io/projected/37bb31ad-8178-4924-ac57-6b8325e3cafa-kube-api-access-p2p6x\") pod \"keystone-db-create-dqt2f\" (UID: \"37bb31ad-8178-4924-ac57-6b8325e3cafa\") " pod="openstack/keystone-db-create-dqt2f"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.466488 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedaac0d-ebad-497d-9b8c-b6ee470782f2-operator-scripts\") pod \"keystone-8883-account-create-update-zdkht\" (UID: \"dedaac0d-ebad-497d-9b8c-b6ee470782f2\") " pod="openstack/keystone-8883-account-create-update-zdkht"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.569589 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37bb31ad-8178-4924-ac57-6b8325e3cafa-operator-scripts\") pod \"keystone-db-create-dqt2f\" (UID: \"37bb31ad-8178-4924-ac57-6b8325e3cafa\") " pod="openstack/keystone-db-create-dqt2f"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.570267 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmn6d\" (UniqueName: \"kubernetes.io/projected/dedaac0d-ebad-497d-9b8c-b6ee470782f2-kube-api-access-jmn6d\") pod \"keystone-8883-account-create-update-zdkht\" (UID: \"dedaac0d-ebad-497d-9b8c-b6ee470782f2\") " pod="openstack/keystone-8883-account-create-update-zdkht"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.570323 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2p6x\" (UniqueName: \"kubernetes.io/projected/37bb31ad-8178-4924-ac57-6b8325e3cafa-kube-api-access-p2p6x\") pod \"keystone-db-create-dqt2f\" (UID: \"37bb31ad-8178-4924-ac57-6b8325e3cafa\") " pod="openstack/keystone-db-create-dqt2f"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.570377 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedaac0d-ebad-497d-9b8c-b6ee470782f2-operator-scripts\") pod \"keystone-8883-account-create-update-zdkht\" (UID: \"dedaac0d-ebad-497d-9b8c-b6ee470782f2\") " pod="openstack/keystone-8883-account-create-update-zdkht"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.572088 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedaac0d-ebad-497d-9b8c-b6ee470782f2-operator-scripts\") pod \"keystone-8883-account-create-update-zdkht\" (UID: \"dedaac0d-ebad-497d-9b8c-b6ee470782f2\") " pod="openstack/keystone-8883-account-create-update-zdkht"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.572700 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37bb31ad-8178-4924-ac57-6b8325e3cafa-operator-scripts\") pod \"keystone-db-create-dqt2f\" (UID: \"37bb31ad-8178-4924-ac57-6b8325e3cafa\") " pod="openstack/keystone-db-create-dqt2f"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.591604 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-nkms6"]
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.593321 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nkms6"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.605305 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nkms6"]
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.608085 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmn6d\" (UniqueName: \"kubernetes.io/projected/dedaac0d-ebad-497d-9b8c-b6ee470782f2-kube-api-access-jmn6d\") pod \"keystone-8883-account-create-update-zdkht\" (UID: \"dedaac0d-ebad-497d-9b8c-b6ee470782f2\") " pod="openstack/keystone-8883-account-create-update-zdkht"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.608188 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2p6x\" (UniqueName: \"kubernetes.io/projected/37bb31ad-8178-4924-ac57-6b8325e3cafa-kube-api-access-p2p6x\") pod \"keystone-db-create-dqt2f\" (UID: \"37bb31ad-8178-4924-ac57-6b8325e3cafa\") " pod="openstack/keystone-db-create-dqt2f"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.672057 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c661122-825e-4bfd-bdbb-b89b44361abb-operator-scripts\") pod \"placement-db-create-nkms6\" (UID: \"4c661122-825e-4bfd-bdbb-b89b44361abb\") " pod="openstack/placement-db-create-nkms6"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.672243 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjv5c\" (UniqueName: \"kubernetes.io/projected/4c661122-825e-4bfd-bdbb-b89b44361abb-kube-api-access-xjv5c\") pod \"placement-db-create-nkms6\" (UID: \"4c661122-825e-4bfd-bdbb-b89b44361abb\") " pod="openstack/placement-db-create-nkms6"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.707432 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dqt2f"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.731937 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8883-account-create-update-zdkht"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.773600 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c661122-825e-4bfd-bdbb-b89b44361abb-operator-scripts\") pod \"placement-db-create-nkms6\" (UID: \"4c661122-825e-4bfd-bdbb-b89b44361abb\") " pod="openstack/placement-db-create-nkms6"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.773735 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjv5c\" (UniqueName: \"kubernetes.io/projected/4c661122-825e-4bfd-bdbb-b89b44361abb-kube-api-access-xjv5c\") pod \"placement-db-create-nkms6\" (UID: \"4c661122-825e-4bfd-bdbb-b89b44361abb\") " pod="openstack/placement-db-create-nkms6"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.774815 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c661122-825e-4bfd-bdbb-b89b44361abb-operator-scripts\") pod \"placement-db-create-nkms6\" (UID: \"4c661122-825e-4bfd-bdbb-b89b44361abb\") " pod="openstack/placement-db-create-nkms6"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.792558 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b36185b7-72d3-4f98-9928-e1c4c27594fa","Type":"ContainerStarted","Data":"8e1d7244df194c7c06cb685e0c720d145ede48a08d1e5df775ae2281d877c868"}
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.793651 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9bd7-account-create-update-hzjql"]
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.795071 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9bd7-account-create-update-hzjql"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.797842 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.821006 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjv5c\" (UniqueName: \"kubernetes.io/projected/4c661122-825e-4bfd-bdbb-b89b44361abb-kube-api-access-xjv5c\") pod \"placement-db-create-nkms6\" (UID: \"4c661122-825e-4bfd-bdbb-b89b44361abb\") " pod="openstack/placement-db-create-nkms6"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.843669 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9bd7-account-create-update-hzjql"]
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.875501 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr27n\" (UniqueName: \"kubernetes.io/projected/dd0e6457-f44a-4059-a466-328fde68deaa-kube-api-access-nr27n\") pod \"placement-9bd7-account-create-update-hzjql\" (UID: \"dd0e6457-f44a-4059-a466-328fde68deaa\") " pod="openstack/placement-9bd7-account-create-update-hzjql"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.875567 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0e6457-f44a-4059-a466-328fde68deaa-operator-scripts\") pod \"placement-9bd7-account-create-update-hzjql\" (UID: \"dd0e6457-f44a-4059-a466-328fde68deaa\") " pod="openstack/placement-9bd7-account-create-update-hzjql"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.977301 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr27n\" (UniqueName: \"kubernetes.io/projected/dd0e6457-f44a-4059-a466-328fde68deaa-kube-api-access-nr27n\") pod \"placement-9bd7-account-create-update-hzjql\" (UID: \"dd0e6457-f44a-4059-a466-328fde68deaa\") " pod="openstack/placement-9bd7-account-create-update-hzjql"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.977375 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0e6457-f44a-4059-a466-328fde68deaa-operator-scripts\") pod \"placement-9bd7-account-create-update-hzjql\" (UID: \"dd0e6457-f44a-4059-a466-328fde68deaa\") " pod="openstack/placement-9bd7-account-create-update-hzjql"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.978390 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0e6457-f44a-4059-a466-328fde68deaa-operator-scripts\") pod \"placement-9bd7-account-create-update-hzjql\" (UID: \"dd0e6457-f44a-4059-a466-328fde68deaa\") " pod="openstack/placement-9bd7-account-create-update-hzjql"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.994089 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr27n\" (UniqueName: \"kubernetes.io/projected/dd0e6457-f44a-4059-a466-328fde68deaa-kube-api-access-nr27n\") pod \"placement-9bd7-account-create-update-hzjql\" (UID: \"dd0e6457-f44a-4059-a466-328fde68deaa\") " pod="openstack/placement-9bd7-account-create-update-hzjql"
Feb 18 19:36:54 crc kubenswrapper[4754]: I0218 19:36:54.997245 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nkms6"
Feb 18 19:36:55 crc kubenswrapper[4754]: W0218 19:36:55.047045 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb85f580_fcb7_43ef_ac52_078b28e014f7.slice/crio-ba9a8b9f8637224fc6c5800483cb277959825747c90e9da8c4082b6136cbe45c WatchSource:0}: Error finding container ba9a8b9f8637224fc6c5800483cb277959825747c90e9da8c4082b6136cbe45c: Status 404 returned error can't find the container with id ba9a8b9f8637224fc6c5800483cb277959825747c90e9da8c4082b6136cbe45c
Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.159540 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9bd7-account-create-update-hzjql"
Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.177224 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-t9npd"
Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.249847 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9bcwf" podUID="c8cdad93-899c-45df-aba0-c680a947f021" containerName="ovn-controller" probeResult="failure" output=<
Feb 18 19:36:55 crc kubenswrapper[4754]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 18 19:36:55 crc kubenswrapper[4754]: >
Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.253011 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pqtp2"
Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.259130 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pqtp2"
Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.281233 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-dns-svc\") pod \"153dc5a5-f545-4be6-b170-2e0f5b64939e\" (UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") "
Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.281305 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-ovsdbserver-sb\") pod \"153dc5a5-f545-4be6-b170-2e0f5b64939e\" (UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") "
Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.281351 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-config\") pod \"153dc5a5-f545-4be6-b170-2e0f5b64939e\" (UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") "
Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.281371 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-ovsdbserver-nb\") pod \"153dc5a5-f545-4be6-b170-2e0f5b64939e\" (UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") "
Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.281443 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l94qm\" (UniqueName: \"kubernetes.io/projected/153dc5a5-f545-4be6-b170-2e0f5b64939e-kube-api-access-l94qm\") pod \"153dc5a5-f545-4be6-b170-2e0f5b64939e\" (UID: \"153dc5a5-f545-4be6-b170-2e0f5b64939e\") "
Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.290458 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/153dc5a5-f545-4be6-b170-2e0f5b64939e-kube-api-access-l94qm" (OuterVolumeSpecName: "kube-api-access-l94qm") pod "153dc5a5-f545-4be6-b170-2e0f5b64939e" (UID: "153dc5a5-f545-4be6-b170-2e0f5b64939e"). InnerVolumeSpecName "kube-api-access-l94qm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.337477 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "153dc5a5-f545-4be6-b170-2e0f5b64939e" (UID: "153dc5a5-f545-4be6-b170-2e0f5b64939e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.339805 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-config" (OuterVolumeSpecName: "config") pod "153dc5a5-f545-4be6-b170-2e0f5b64939e" (UID: "153dc5a5-f545-4be6-b170-2e0f5b64939e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.343925 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "153dc5a5-f545-4be6-b170-2e0f5b64939e" (UID: "153dc5a5-f545-4be6-b170-2e0f5b64939e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.358279 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "153dc5a5-f545-4be6-b170-2e0f5b64939e" (UID: "153dc5a5-f545-4be6-b170-2e0f5b64939e"). InnerVolumeSpecName "ovsdbserver-nb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.387799 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.387851 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.387866 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l94qm\" (UniqueName: \"kubernetes.io/projected/153dc5a5-f545-4be6-b170-2e0f5b64939e-kube-api-access-l94qm\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.387880 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.387894 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/153dc5a5-f545-4be6-b170-2e0f5b64939e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.495603 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9bcwf-config-qt7kh"] Feb 18 19:36:55 crc kubenswrapper[4754]: E0218 19:36:55.496657 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153dc5a5-f545-4be6-b170-2e0f5b64939e" containerName="init" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.496680 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="153dc5a5-f545-4be6-b170-2e0f5b64939e" containerName="init" Feb 18 19:36:55 crc kubenswrapper[4754]: E0218 19:36:55.496692 4754 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="153dc5a5-f545-4be6-b170-2e0f5b64939e" containerName="dnsmasq-dns" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.496699 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="153dc5a5-f545-4be6-b170-2e0f5b64939e" containerName="dnsmasq-dns" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.497107 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="153dc5a5-f545-4be6-b170-2e0f5b64939e" containerName="dnsmasq-dns" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.503381 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.513664 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.553231 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9bcwf-config-qt7kh"] Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.592644 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-var-run-ovn\") pod \"ovn-controller-9bcwf-config-qt7kh\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.592805 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-additional-scripts\") pod \"ovn-controller-9bcwf-config-qt7kh\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.592839 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvxq4\" (UniqueName: \"kubernetes.io/projected/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-kube-api-access-fvxq4\") pod \"ovn-controller-9bcwf-config-qt7kh\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.592908 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-var-run\") pod \"ovn-controller-9bcwf-config-qt7kh\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.593190 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-var-log-ovn\") pod \"ovn-controller-9bcwf-config-qt7kh\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.593227 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-scripts\") pod \"ovn-controller-9bcwf-config-qt7kh\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.696595 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-var-run\") pod \"ovn-controller-9bcwf-config-qt7kh\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 
19:36:55.697098 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-var-log-ovn\") pod \"ovn-controller-9bcwf-config-qt7kh\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.697123 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-scripts\") pod \"ovn-controller-9bcwf-config-qt7kh\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.697182 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-var-run-ovn\") pod \"ovn-controller-9bcwf-config-qt7kh\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.697268 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-additional-scripts\") pod \"ovn-controller-9bcwf-config-qt7kh\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.697297 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvxq4\" (UniqueName: \"kubernetes.io/projected/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-kube-api-access-fvxq4\") pod \"ovn-controller-9bcwf-config-qt7kh\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 
19:36:55.697784 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-var-log-ovn\") pod \"ovn-controller-9bcwf-config-qt7kh\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.698412 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-var-run-ovn\") pod \"ovn-controller-9bcwf-config-qt7kh\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.699845 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-scripts\") pod \"ovn-controller-9bcwf-config-qt7kh\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.699916 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-var-run\") pod \"ovn-controller-9bcwf-config-qt7kh\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.700510 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-additional-scripts\") pod \"ovn-controller-9bcwf-config-qt7kh\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.786404 4754 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fvxq4\" (UniqueName: \"kubernetes.io/projected/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-kube-api-access-fvxq4\") pod \"ovn-controller-9bcwf-config-qt7kh\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.805677 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-t9npd" event={"ID":"153dc5a5-f545-4be6-b170-2e0f5b64939e","Type":"ContainerDied","Data":"2c2398a8a2307fb70e29428ab6dcec0c81dd991c6e3b1df626e75adebc9093c1"} Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.805709 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-t9npd" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.805763 4754 scope.go:117] "RemoveContainer" containerID="5d6f0e87fb0a12d361c39278d29c306922378ef236dd580ef52ee9168d2ddccc" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.807654 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-snqpk" event={"ID":"dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4","Type":"ContainerStarted","Data":"d0e224b8c48c819419055bd2162941d41d9355b9d49e6d30e1b2c161a7b2b00c"} Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.813020 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ltst9" event={"ID":"fb85f580-fcb7-43ef-ac52-078b28e014f7","Type":"ContainerStarted","Data":"ba9a8b9f8637224fc6c5800483cb277959825747c90e9da8c4082b6136cbe45c"} Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.871983 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nkms6"] Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.943439 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-7l2r8"] Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.944763 4754 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-7l2r8" Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.959857 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-7l2r8"] Feb 18 19:36:55 crc kubenswrapper[4754]: I0218 19:36:55.966652 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.009884 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ttpq\" (UniqueName: \"kubernetes.io/projected/abef9d48-efe9-4491-96cd-a1cd94fecfe1-kube-api-access-5ttpq\") pod \"watcher-db-create-7l2r8\" (UID: \"abef9d48-efe9-4491-96cd-a1cd94fecfe1\") " pod="openstack/watcher-db-create-7l2r8" Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.014136 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abef9d48-efe9-4491-96cd-a1cd94fecfe1-operator-scripts\") pod \"watcher-db-create-7l2r8\" (UID: \"abef9d48-efe9-4491-96cd-a1cd94fecfe1\") " pod="openstack/watcher-db-create-7l2r8" Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.011016 4754 scope.go:117] "RemoveContainer" containerID="1bdeb72c06bb86ed53bb1ccdb23154b383f2665cf680bea9dfadd4349e920012" Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.016691 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-t9npd"] Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.055756 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-t9npd"] Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.076596 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-2262-account-create-update-45hjg"] Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.081419 
4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-2262-account-create-update-45hjg" Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.084388 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.098759 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-2262-account-create-update-45hjg"] Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.116389 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lxr2\" (UniqueName: \"kubernetes.io/projected/042ec2fb-f4b7-4310-a6eb-6f71c8e440c7-kube-api-access-6lxr2\") pod \"watcher-2262-account-create-update-45hjg\" (UID: \"042ec2fb-f4b7-4310-a6eb-6f71c8e440c7\") " pod="openstack/watcher-2262-account-create-update-45hjg" Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.116499 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ttpq\" (UniqueName: \"kubernetes.io/projected/abef9d48-efe9-4491-96cd-a1cd94fecfe1-kube-api-access-5ttpq\") pod \"watcher-db-create-7l2r8\" (UID: \"abef9d48-efe9-4491-96cd-a1cd94fecfe1\") " pod="openstack/watcher-db-create-7l2r8" Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.116548 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/042ec2fb-f4b7-4310-a6eb-6f71c8e440c7-operator-scripts\") pod \"watcher-2262-account-create-update-45hjg\" (UID: \"042ec2fb-f4b7-4310-a6eb-6f71c8e440c7\") " pod="openstack/watcher-2262-account-create-update-45hjg" Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.116587 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/abef9d48-efe9-4491-96cd-a1cd94fecfe1-operator-scripts\") pod \"watcher-db-create-7l2r8\" (UID: \"abef9d48-efe9-4491-96cd-a1cd94fecfe1\") " pod="openstack/watcher-db-create-7l2r8" Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.117466 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abef9d48-efe9-4491-96cd-a1cd94fecfe1-operator-scripts\") pod \"watcher-db-create-7l2r8\" (UID: \"abef9d48-efe9-4491-96cd-a1cd94fecfe1\") " pod="openstack/watcher-db-create-7l2r8" Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.118000 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dqt2f"] Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.141910 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ttpq\" (UniqueName: \"kubernetes.io/projected/abef9d48-efe9-4491-96cd-a1cd94fecfe1-kube-api-access-5ttpq\") pod \"watcher-db-create-7l2r8\" (UID: \"abef9d48-efe9-4491-96cd-a1cd94fecfe1\") " pod="openstack/watcher-db-create-7l2r8" Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.219521 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/042ec2fb-f4b7-4310-a6eb-6f71c8e440c7-operator-scripts\") pod \"watcher-2262-account-create-update-45hjg\" (UID: \"042ec2fb-f4b7-4310-a6eb-6f71c8e440c7\") " pod="openstack/watcher-2262-account-create-update-45hjg" Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.224791 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lxr2\" (UniqueName: \"kubernetes.io/projected/042ec2fb-f4b7-4310-a6eb-6f71c8e440c7-kube-api-access-6lxr2\") pod \"watcher-2262-account-create-update-45hjg\" (UID: \"042ec2fb-f4b7-4310-a6eb-6f71c8e440c7\") " pod="openstack/watcher-2262-account-create-update-45hjg" Feb 18 19:36:56 crc 
kubenswrapper[4754]: I0218 19:36:56.223606 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/042ec2fb-f4b7-4310-a6eb-6f71c8e440c7-operator-scripts\") pod \"watcher-2262-account-create-update-45hjg\" (UID: \"042ec2fb-f4b7-4310-a6eb-6f71c8e440c7\") " pod="openstack/watcher-2262-account-create-update-45hjg" Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.233191 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="153dc5a5-f545-4be6-b170-2e0f5b64939e" path="/var/lib/kubelet/pods/153dc5a5-f545-4be6-b170-2e0f5b64939e/volumes" Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.248597 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8883-account-create-update-zdkht"] Feb 18 19:36:56 crc kubenswrapper[4754]: W0218 19:36:56.251794 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddedaac0d_ebad_497d_9b8c_b6ee470782f2.slice/crio-058b21a125a7e00dce8523f64532d210826078b73e4e334a6174e9c8822351f4 WatchSource:0}: Error finding container 058b21a125a7e00dce8523f64532d210826078b73e4e334a6174e9c8822351f4: Status 404 returned error can't find the container with id 058b21a125a7e00dce8523f64532d210826078b73e4e334a6174e9c8822351f4 Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.260442 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lxr2\" (UniqueName: \"kubernetes.io/projected/042ec2fb-f4b7-4310-a6eb-6f71c8e440c7-kube-api-access-6lxr2\") pod \"watcher-2262-account-create-update-45hjg\" (UID: \"042ec2fb-f4b7-4310-a6eb-6f71c8e440c7\") " pod="openstack/watcher-2262-account-create-update-45hjg" Feb 18 19:36:56 crc kubenswrapper[4754]: W0218 19:36:56.264307 4754 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd0e6457_f44a_4059_a466_328fde68deaa.slice/crio-c30c2f8e1a99a715e25a822d46a5f8316e85ef545b536d403076920157164bcf WatchSource:0}: Error finding container c30c2f8e1a99a715e25a822d46a5f8316e85ef545b536d403076920157164bcf: Status 404 returned error can't find the container with id c30c2f8e1a99a715e25a822d46a5f8316e85ef545b536d403076920157164bcf Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.266183 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9bd7-account-create-update-hzjql"] Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.303019 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-7l2r8" Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.473736 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-2262-account-create-update-45hjg" Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.568433 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9bcwf-config-qt7kh"] Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.823029 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-7l2r8"] Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.853748 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nkms6" event={"ID":"4c661122-825e-4bfd-bdbb-b89b44361abb","Type":"ContainerStarted","Data":"e2d6e1a18b7b071d0b939608fb10b1f2ce324ee7536258e8789a0b525d547a31"} Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.862654 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dqt2f" event={"ID":"37bb31ad-8178-4924-ac57-6b8325e3cafa","Type":"ContainerStarted","Data":"a5637c29b57c1704f2d8eb6bf1dbe2fe8d40241df30333bda4cabd7fc0b027dd"} Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.881683 4754 
generic.go:334] "Generic (PLEG): container finished" podID="d87128e7-abb0-4dd7-9b9f-04a4393c2313" containerID="8dba12de5efdeb76fc4a632f246c1d52ee9b8ba428ec33056f2bef2b2ac692e4" exitCode=0 Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.881782 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d87128e7-abb0-4dd7-9b9f-04a4393c2313","Type":"ContainerDied","Data":"8dba12de5efdeb76fc4a632f246c1d52ee9b8ba428ec33056f2bef2b2ac692e4"} Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.888337 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9bd7-account-create-update-hzjql" event={"ID":"dd0e6457-f44a-4059-a466-328fde68deaa","Type":"ContainerStarted","Data":"c30c2f8e1a99a715e25a822d46a5f8316e85ef545b536d403076920157164bcf"} Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.891304 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8883-account-create-update-zdkht" event={"ID":"dedaac0d-ebad-497d-9b8c-b6ee470782f2","Type":"ContainerStarted","Data":"058b21a125a7e00dce8523f64532d210826078b73e4e334a6174e9c8822351f4"} Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.893638 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9bcwf-config-qt7kh" event={"ID":"b0eba25e-9371-4594-b9b1-ab608ea8f4ec","Type":"ContainerStarted","Data":"505dd57901dd2753aed8dc48ef020fe824a51914d63918ee1a2f9e58488bba91"} Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.895534 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8lf58" event={"ID":"da8323ff-9b88-4d7f-b400-3425c16e92d0","Type":"ContainerStarted","Data":"d8f04a8f0314382a747228bbdbfb358bb64ede7916eb5b0a2d2c9aefdd7c2385"} Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.946625 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-snqpk" podStartSLOduration=8.946603437 
podStartE2EDuration="8.946603437s" podCreationTimestamp="2026-02-18 19:36:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:56.940801079 +0000 UTC m=+1119.391213875" watchObservedRunningTime="2026-02-18 19:36:56.946603437 +0000 UTC m=+1119.397016233" Feb 18 19:36:56 crc kubenswrapper[4754]: I0218 19:36:56.979567 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-2262-account-create-update-45hjg"] Feb 18 19:36:57 crc kubenswrapper[4754]: I0218 19:36:57.910827 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ltst9" event={"ID":"fb85f580-fcb7-43ef-ac52-078b28e014f7","Type":"ContainerStarted","Data":"8757c0041c521b210d90d1dfd35bd4fe1cb8d48b9d64d682a7635c2ad0362d92"} Feb 18 19:36:57 crc kubenswrapper[4754]: I0218 19:36:57.923334 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8883-account-create-update-zdkht" event={"ID":"dedaac0d-ebad-497d-9b8c-b6ee470782f2","Type":"ContainerStarted","Data":"79052e4ab08825dd4c8f4c4b65e9e53cca6a8cd1a355877764ee9c4a80d26a87"} Feb 18 19:36:57 crc kubenswrapper[4754]: I0218 19:36:57.926927 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-2262-account-create-update-45hjg" event={"ID":"042ec2fb-f4b7-4310-a6eb-6f71c8e440c7","Type":"ContainerStarted","Data":"02394fbf02096656bc1d0541cf207138700eaff999ec2f412e0aa60b1ffe1486"} Feb 18 19:36:57 crc kubenswrapper[4754]: I0218 19:36:57.929843 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nkms6" event={"ID":"4c661122-825e-4bfd-bdbb-b89b44361abb","Type":"ContainerStarted","Data":"f0ae20e56636f2f58e170665eb94703c89ec3b120c6e990c3a35db22f4d431ce"} Feb 18 19:36:57 crc kubenswrapper[4754]: I0218 19:36:57.932506 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dqt2f" 
event={"ID":"37bb31ad-8178-4924-ac57-6b8325e3cafa","Type":"ContainerStarted","Data":"d5482053b9ffc15608039f82b5c85e977bcab1b84011872ddb54ebfdda6a02af"} Feb 18 19:36:57 crc kubenswrapper[4754]: I0218 19:36:57.940580 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9bc3-account-create-update-hrjdd" event={"ID":"5616676f-6b15-4f24-aa3f-5d88ad180239","Type":"ContainerStarted","Data":"86659376ef5c511429490cbc5db87d2db3d4570426e400072487fe2f3a5219ee"} Feb 18 19:36:57 crc kubenswrapper[4754]: I0218 19:36:57.942225 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-7l2r8" event={"ID":"abef9d48-efe9-4491-96cd-a1cd94fecfe1","Type":"ContainerStarted","Data":"e0cd5269ddf50602eb1b4379f90da632c3e67719fe386aff5718300860b1c3a2"} Feb 18 19:36:57 crc kubenswrapper[4754]: I0218 19:36:57.945599 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9bd7-account-create-update-hzjql" event={"ID":"dd0e6457-f44a-4059-a466-328fde68deaa","Type":"ContainerStarted","Data":"863fb4218b0293b802ec35dd63a1f84a5251700d09e4c0ab73e51302f9873420"} Feb 18 19:36:57 crc kubenswrapper[4754]: I0218 19:36:57.947860 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-ltst9" podStartSLOduration=7.947820344 podStartE2EDuration="7.947820344s" podCreationTimestamp="2026-02-18 19:36:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:57.930626673 +0000 UTC m=+1120.381039469" watchObservedRunningTime="2026-02-18 19:36:57.947820344 +0000 UTC m=+1120.398233140" Feb 18 19:36:57 crc kubenswrapper[4754]: I0218 19:36:57.974416 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-8883-account-create-update-zdkht" podStartSLOduration=3.974389024 podStartE2EDuration="3.974389024s" podCreationTimestamp="2026-02-18 19:36:54 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:57.963300411 +0000 UTC m=+1120.413713217" watchObservedRunningTime="2026-02-18 19:36:57.974389024 +0000 UTC m=+1120.424801820" Feb 18 19:36:57 crc kubenswrapper[4754]: I0218 19:36:57.983710 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-nkms6" podStartSLOduration=3.983685581 podStartE2EDuration="3.983685581s" podCreationTimestamp="2026-02-18 19:36:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:57.980361578 +0000 UTC m=+1120.430774384" watchObservedRunningTime="2026-02-18 19:36:57.983685581 +0000 UTC m=+1120.434098387" Feb 18 19:36:58 crc kubenswrapper[4754]: I0218 19:36:58.011458 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-9bd7-account-create-update-hzjql" podStartSLOduration=4.011432558 podStartE2EDuration="4.011432558s" podCreationTimestamp="2026-02-18 19:36:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:57.999913042 +0000 UTC m=+1120.450325848" watchObservedRunningTime="2026-02-18 19:36:58.011432558 +0000 UTC m=+1120.461845344" Feb 18 19:36:58 crc kubenswrapper[4754]: I0218 19:36:58.031551 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-8lf58" podStartSLOduration=6.570650386 podStartE2EDuration="21.031513608s" podCreationTimestamp="2026-02-18 19:36:37 +0000 UTC" firstStartedPulling="2026-02-18 19:36:40.938181159 +0000 UTC m=+1103.388593955" lastFinishedPulling="2026-02-18 19:36:55.399044381 +0000 UTC m=+1117.849457177" observedRunningTime="2026-02-18 19:36:58.018852527 +0000 UTC m=+1120.469265343" 
watchObservedRunningTime="2026-02-18 19:36:58.031513608 +0000 UTC m=+1120.481926404" Feb 18 19:36:58 crc kubenswrapper[4754]: I0218 19:36:58.041835 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-9bc3-account-create-update-hrjdd" podStartSLOduration=10.041813436 podStartE2EDuration="10.041813436s" podCreationTimestamp="2026-02-18 19:36:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:58.038213415 +0000 UTC m=+1120.488626221" watchObservedRunningTime="2026-02-18 19:36:58.041813436 +0000 UTC m=+1120.492226242" Feb 18 19:36:58 crc kubenswrapper[4754]: I0218 19:36:58.061358 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-dqt2f" podStartSLOduration=4.061333239 podStartE2EDuration="4.061333239s" podCreationTimestamp="2026-02-18 19:36:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:58.052426433 +0000 UTC m=+1120.502839239" watchObservedRunningTime="2026-02-18 19:36:58.061333239 +0000 UTC m=+1120.511746035" Feb 18 19:36:58 crc kubenswrapper[4754]: I0218 19:36:58.963663 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b36185b7-72d3-4f98-9928-e1c4c27594fa","Type":"ContainerStarted","Data":"9c5896770be384d38ce1aa3dc0cc6e57ff321dc388b3752178617ca239c8d1bc"} Feb 18 19:36:58 crc kubenswrapper[4754]: I0218 19:36:58.968554 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-7l2r8" event={"ID":"abef9d48-efe9-4491-96cd-a1cd94fecfe1","Type":"ContainerStarted","Data":"8362313fdf35e4cd5d50bc3e5125fb41f7f6f3d6ecfac1e64f9416ec40c08f61"} Feb 18 19:36:58 crc kubenswrapper[4754]: I0218 19:36:58.977172 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d87128e7-abb0-4dd7-9b9f-04a4393c2313","Type":"ContainerStarted","Data":"a69dbc60091260ceda33a8fe29c15bed8145465523e90ca8c0e175f1a682f469"} Feb 18 19:36:58 crc kubenswrapper[4754]: I0218 19:36:58.977498 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:36:58 crc kubenswrapper[4754]: I0218 19:36:58.979432 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-2262-account-create-update-45hjg" event={"ID":"042ec2fb-f4b7-4310-a6eb-6f71c8e440c7","Type":"ContainerStarted","Data":"327a1b3c02eb0f9e90847177221fa1219025fe1b9e107dc0085f0b09c2cc96f1"} Feb 18 19:36:58 crc kubenswrapper[4754]: I0218 19:36:58.985849 4754 generic.go:334] "Generic (PLEG): container finished" podID="b0eba25e-9371-4594-b9b1-ab608ea8f4ec" containerID="a18468bd4755cfe83e3d66e613b0bd58c6cd0663fa300adb40aecd0b9c589f74" exitCode=0 Feb 18 19:36:58 crc kubenswrapper[4754]: I0218 19:36:58.985861 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9bcwf-config-qt7kh" event={"ID":"b0eba25e-9371-4594-b9b1-ab608ea8f4ec","Type":"ContainerDied","Data":"a18468bd4755cfe83e3d66e613b0bd58c6cd0663fa300adb40aecd0b9c589f74"} Feb 18 19:36:58 crc kubenswrapper[4754]: I0218 19:36:58.991844 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-create-7l2r8" podStartSLOduration=3.991823271 podStartE2EDuration="3.991823271s" podCreationTimestamp="2026-02-18 19:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:58.989430587 +0000 UTC m=+1121.439843384" watchObservedRunningTime="2026-02-18 19:36:58.991823271 +0000 UTC m=+1121.442236067" Feb 18 19:36:59 crc kubenswrapper[4754]: I0218 19:36:59.020271 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/watcher-2262-account-create-update-45hjg" podStartSLOduration=3.020246779 podStartE2EDuration="3.020246779s" podCreationTimestamp="2026-02-18 19:36:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:59.016559455 +0000 UTC m=+1121.466972261" watchObservedRunningTime="2026-02-18 19:36:59.020246779 +0000 UTC m=+1121.470659575" Feb 18 19:36:59 crc kubenswrapper[4754]: I0218 19:36:59.068737 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.850171816 podStartE2EDuration="1m20.068710245s" podCreationTimestamp="2026-02-18 19:35:39 +0000 UTC" firstStartedPulling="2026-02-18 19:35:41.634443296 +0000 UTC m=+1044.084856082" lastFinishedPulling="2026-02-18 19:36:21.852981715 +0000 UTC m=+1084.303394511" observedRunningTime="2026-02-18 19:36:59.066311061 +0000 UTC m=+1121.516723857" watchObservedRunningTime="2026-02-18 19:36:59.068710245 +0000 UTC m=+1121.519123061" Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:36:59.999945 4754 generic.go:334] "Generic (PLEG): container finished" podID="dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4" containerID="d0e224b8c48c819419055bd2162941d41d9355b9d49e6d30e1b2c161a7b2b00c" exitCode=0 Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.000015 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-snqpk" event={"ID":"dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4","Type":"ContainerDied","Data":"d0e224b8c48c819419055bd2162941d41d9355b9d49e6d30e1b2c161a7b2b00c"} Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.269274 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-9bcwf" Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.421433 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.529850 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-var-run\") pod \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.529980 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-var-run" (OuterVolumeSpecName: "var-run") pod "b0eba25e-9371-4594-b9b1-ab608ea8f4ec" (UID: "b0eba25e-9371-4594-b9b1-ab608ea8f4ec"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.530004 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-additional-scripts\") pod \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.530161 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-scripts\") pod \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.530244 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvxq4\" (UniqueName: \"kubernetes.io/projected/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-kube-api-access-fvxq4\") pod \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.530353 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-var-log-ovn\") pod \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.530478 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-var-run-ovn\") pod \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\" (UID: \"b0eba25e-9371-4594-b9b1-ab608ea8f4ec\") " Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.530679 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b0eba25e-9371-4594-b9b1-ab608ea8f4ec" (UID: "b0eba25e-9371-4594-b9b1-ab608ea8f4ec"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.530750 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b0eba25e-9371-4594-b9b1-ab608ea8f4ec" (UID: "b0eba25e-9371-4594-b9b1-ab608ea8f4ec"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.530749 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b0eba25e-9371-4594-b9b1-ab608ea8f4ec" (UID: "b0eba25e-9371-4594-b9b1-ab608ea8f4ec"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.531411 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-scripts" (OuterVolumeSpecName: "scripts") pod "b0eba25e-9371-4594-b9b1-ab608ea8f4ec" (UID: "b0eba25e-9371-4594-b9b1-ab608ea8f4ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.531736 4754 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.531810 4754 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.531873 4754 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-var-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.531935 4754 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.532007 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.538006 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-kube-api-access-fvxq4" (OuterVolumeSpecName: "kube-api-access-fvxq4") pod "b0eba25e-9371-4594-b9b1-ab608ea8f4ec" (UID: "b0eba25e-9371-4594-b9b1-ab608ea8f4ec"). InnerVolumeSpecName "kube-api-access-fvxq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:00 crc kubenswrapper[4754]: I0218 19:37:00.634335 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvxq4\" (UniqueName: \"kubernetes.io/projected/b0eba25e-9371-4594-b9b1-ab608ea8f4ec-kube-api-access-fvxq4\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.011671 4754 generic.go:334] "Generic (PLEG): container finished" podID="fb85f580-fcb7-43ef-ac52-078b28e014f7" containerID="8757c0041c521b210d90d1dfd35bd4fe1cb8d48b9d64d682a7635c2ad0362d92" exitCode=0 Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.011778 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ltst9" event={"ID":"fb85f580-fcb7-43ef-ac52-078b28e014f7","Type":"ContainerDied","Data":"8757c0041c521b210d90d1dfd35bd4fe1cb8d48b9d64d682a7635c2ad0362d92"} Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.013315 4754 generic.go:334] "Generic (PLEG): container finished" podID="dedaac0d-ebad-497d-9b8c-b6ee470782f2" containerID="79052e4ab08825dd4c8f4c4b65e9e53cca6a8cd1a355877764ee9c4a80d26a87" exitCode=0 Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.013430 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8883-account-create-update-zdkht" event={"ID":"dedaac0d-ebad-497d-9b8c-b6ee470782f2","Type":"ContainerDied","Data":"79052e4ab08825dd4c8f4c4b65e9e53cca6a8cd1a355877764ee9c4a80d26a87"} Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.015394 4754 generic.go:334] "Generic (PLEG): container finished" podID="042ec2fb-f4b7-4310-a6eb-6f71c8e440c7" 
containerID="327a1b3c02eb0f9e90847177221fa1219025fe1b9e107dc0085f0b09c2cc96f1" exitCode=0 Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.015457 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-2262-account-create-update-45hjg" event={"ID":"042ec2fb-f4b7-4310-a6eb-6f71c8e440c7","Type":"ContainerDied","Data":"327a1b3c02eb0f9e90847177221fa1219025fe1b9e107dc0085f0b09c2cc96f1"} Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.017579 4754 generic.go:334] "Generic (PLEG): container finished" podID="4c661122-825e-4bfd-bdbb-b89b44361abb" containerID="f0ae20e56636f2f58e170665eb94703c89ec3b120c6e990c3a35db22f4d431ce" exitCode=0 Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.017673 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nkms6" event={"ID":"4c661122-825e-4bfd-bdbb-b89b44361abb","Type":"ContainerDied","Data":"f0ae20e56636f2f58e170665eb94703c89ec3b120c6e990c3a35db22f4d431ce"} Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.019097 4754 generic.go:334] "Generic (PLEG): container finished" podID="37bb31ad-8178-4924-ac57-6b8325e3cafa" containerID="d5482053b9ffc15608039f82b5c85e977bcab1b84011872ddb54ebfdda6a02af" exitCode=0 Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.019188 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dqt2f" event={"ID":"37bb31ad-8178-4924-ac57-6b8325e3cafa","Type":"ContainerDied","Data":"d5482053b9ffc15608039f82b5c85e977bcab1b84011872ddb54ebfdda6a02af"} Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.031706 4754 generic.go:334] "Generic (PLEG): container finished" podID="abef9d48-efe9-4491-96cd-a1cd94fecfe1" containerID="8362313fdf35e4cd5d50bc3e5125fb41f7f6f3d6ecfac1e64f9416ec40c08f61" exitCode=0 Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.031775 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-7l2r8" 
event={"ID":"abef9d48-efe9-4491-96cd-a1cd94fecfe1","Type":"ContainerDied","Data":"8362313fdf35e4cd5d50bc3e5125fb41f7f6f3d6ecfac1e64f9416ec40c08f61"} Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.040622 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9bcwf-config-qt7kh" event={"ID":"b0eba25e-9371-4594-b9b1-ab608ea8f4ec","Type":"ContainerDied","Data":"505dd57901dd2753aed8dc48ef020fe824a51914d63918ee1a2f9e58488bba91"} Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.040661 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9bcwf-config-qt7kh" Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.040677 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="505dd57901dd2753aed8dc48ef020fe824a51914d63918ee1a2f9e58488bba91" Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.043905 4754 generic.go:334] "Generic (PLEG): container finished" podID="5616676f-6b15-4f24-aa3f-5d88ad180239" containerID="86659376ef5c511429490cbc5db87d2db3d4570426e400072487fe2f3a5219ee" exitCode=0 Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.044166 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9bc3-account-create-update-hrjdd" event={"ID":"5616676f-6b15-4f24-aa3f-5d88ad180239","Type":"ContainerDied","Data":"86659376ef5c511429490cbc5db87d2db3d4570426e400072487fe2f3a5219ee"} Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.047465 4754 generic.go:334] "Generic (PLEG): container finished" podID="dd0e6457-f44a-4059-a466-328fde68deaa" containerID="863fb4218b0293b802ec35dd63a1f84a5251700d09e4c0ab73e51302f9873420" exitCode=0 Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.047582 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9bd7-account-create-update-hzjql" 
event={"ID":"dd0e6457-f44a-4059-a466-328fde68deaa","Type":"ContainerDied","Data":"863fb4218b0293b802ec35dd63a1f84a5251700d09e4c0ab73e51302f9873420"} Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.416659 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-snqpk" Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.540791 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9bcwf-config-qt7kh"] Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.548778 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-9bcwf-config-qt7kh"] Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.583638 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gwfm\" (UniqueName: \"kubernetes.io/projected/dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4-kube-api-access-2gwfm\") pod \"dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4\" (UID: \"dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4\") " Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.583766 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4-operator-scripts\") pod \"dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4\" (UID: \"dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4\") " Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.584657 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4" (UID: "dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.607885 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4-kube-api-access-2gwfm" (OuterVolumeSpecName: "kube-api-access-2gwfm") pod "dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4" (UID: "dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4"). InnerVolumeSpecName "kube-api-access-2gwfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.686001 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gwfm\" (UniqueName: \"kubernetes.io/projected/dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4-kube-api-access-2gwfm\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:01 crc kubenswrapper[4754]: I0218 19:37:01.686037 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:02 crc kubenswrapper[4754]: I0218 19:37:02.059205 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-snqpk" event={"ID":"dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4","Type":"ContainerDied","Data":"6f8fe606f3d087fd79b4416d9517fa231ff3b5a66406cf0ec65184b38048d077"} Feb 18 19:37:02 crc kubenswrapper[4754]: I0218 19:37:02.059269 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-snqpk" Feb 18 19:37:02 crc kubenswrapper[4754]: I0218 19:37:02.059285 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f8fe606f3d087fd79b4416d9517fa231ff3b5a66406cf0ec65184b38048d077" Feb 18 19:37:02 crc kubenswrapper[4754]: I0218 19:37:02.251356 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0eba25e-9371-4594-b9b1-ab608ea8f4ec" path="/var/lib/kubelet/pods/b0eba25e-9371-4594-b9b1-ab608ea8f4ec/volumes" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.086162 4754 generic.go:334] "Generic (PLEG): container finished" podID="3c266c06-8bfc-47ba-bab9-6ef36d6294e5" containerID="8efa9ed2f8ec070336ff0001d8bd4208dbc88caaf6c105078f6dc7a9d1a19693" exitCode=0 Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.086247 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c266c06-8bfc-47ba-bab9-6ef36d6294e5","Type":"ContainerDied","Data":"8efa9ed2f8ec070336ff0001d8bd4208dbc88caaf6c105078f6dc7a9d1a19693"} Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.506930 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nkms6" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.518730 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9bd7-account-create-update-hzjql" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.522390 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ltst9" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.530857 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0e6457-f44a-4059-a466-328fde68deaa-operator-scripts\") pod \"dd0e6457-f44a-4059-a466-328fde68deaa\" (UID: \"dd0e6457-f44a-4059-a466-328fde68deaa\") " Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.530955 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c661122-825e-4bfd-bdbb-b89b44361abb-operator-scripts\") pod \"4c661122-825e-4bfd-bdbb-b89b44361abb\" (UID: \"4c661122-825e-4bfd-bdbb-b89b44361abb\") " Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.531198 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb85f580-fcb7-43ef-ac52-078b28e014f7-operator-scripts\") pod \"fb85f580-fcb7-43ef-ac52-078b28e014f7\" (UID: \"fb85f580-fcb7-43ef-ac52-078b28e014f7\") " Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.531244 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqsj9\" (UniqueName: \"kubernetes.io/projected/fb85f580-fcb7-43ef-ac52-078b28e014f7-kube-api-access-mqsj9\") pod \"fb85f580-fcb7-43ef-ac52-078b28e014f7\" (UID: \"fb85f580-fcb7-43ef-ac52-078b28e014f7\") " Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.531300 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr27n\" (UniqueName: \"kubernetes.io/projected/dd0e6457-f44a-4059-a466-328fde68deaa-kube-api-access-nr27n\") pod \"dd0e6457-f44a-4059-a466-328fde68deaa\" (UID: \"dd0e6457-f44a-4059-a466-328fde68deaa\") " Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.531428 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xjv5c\" (UniqueName: \"kubernetes.io/projected/4c661122-825e-4bfd-bdbb-b89b44361abb-kube-api-access-xjv5c\") pod \"4c661122-825e-4bfd-bdbb-b89b44361abb\" (UID: \"4c661122-825e-4bfd-bdbb-b89b44361abb\") " Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.531825 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb85f580-fcb7-43ef-ac52-078b28e014f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb85f580-fcb7-43ef-ac52-078b28e014f7" (UID: "fb85f580-fcb7-43ef-ac52-078b28e014f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.531879 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c661122-825e-4bfd-bdbb-b89b44361abb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c661122-825e-4bfd-bdbb-b89b44361abb" (UID: "4c661122-825e-4bfd-bdbb-b89b44361abb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.531910 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd0e6457-f44a-4059-a466-328fde68deaa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd0e6457-f44a-4059-a466-328fde68deaa" (UID: "dd0e6457-f44a-4059-a466-328fde68deaa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.532189 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0e6457-f44a-4059-a466-328fde68deaa-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.532220 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c661122-825e-4bfd-bdbb-b89b44361abb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.532234 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb85f580-fcb7-43ef-ac52-078b28e014f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.538421 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd0e6457-f44a-4059-a466-328fde68deaa-kube-api-access-nr27n" (OuterVolumeSpecName: "kube-api-access-nr27n") pod "dd0e6457-f44a-4059-a466-328fde68deaa" (UID: "dd0e6457-f44a-4059-a466-328fde68deaa"). InnerVolumeSpecName "kube-api-access-nr27n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.540113 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-7l2r8" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.541366 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb85f580-fcb7-43ef-ac52-078b28e014f7-kube-api-access-mqsj9" (OuterVolumeSpecName: "kube-api-access-mqsj9") pod "fb85f580-fcb7-43ef-ac52-078b28e014f7" (UID: "fb85f580-fcb7-43ef-ac52-078b28e014f7"). InnerVolumeSpecName "kube-api-access-mqsj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.548725 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c661122-825e-4bfd-bdbb-b89b44361abb-kube-api-access-xjv5c" (OuterVolumeSpecName: "kube-api-access-xjv5c") pod "4c661122-825e-4bfd-bdbb-b89b44361abb" (UID: "4c661122-825e-4bfd-bdbb-b89b44361abb"). InnerVolumeSpecName "kube-api-access-xjv5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.635617 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqsj9\" (UniqueName: \"kubernetes.io/projected/fb85f580-fcb7-43ef-ac52-078b28e014f7-kube-api-access-mqsj9\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.635666 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr27n\" (UniqueName: \"kubernetes.io/projected/dd0e6457-f44a-4059-a466-328fde68deaa-kube-api-access-nr27n\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.635678 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjv5c\" (UniqueName: \"kubernetes.io/projected/4c661122-825e-4bfd-bdbb-b89b44361abb-kube-api-access-xjv5c\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.643254 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dqt2f" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.667017 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8883-account-create-update-zdkht" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.675679 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-2262-account-create-update-45hjg" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.697740 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9bc3-account-create-update-hrjdd" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.737070 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abef9d48-efe9-4491-96cd-a1cd94fecfe1-operator-scripts\") pod \"abef9d48-efe9-4491-96cd-a1cd94fecfe1\" (UID: \"abef9d48-efe9-4491-96cd-a1cd94fecfe1\") " Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.737582 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ttpq\" (UniqueName: \"kubernetes.io/projected/abef9d48-efe9-4491-96cd-a1cd94fecfe1-kube-api-access-5ttpq\") pod \"abef9d48-efe9-4491-96cd-a1cd94fecfe1\" (UID: \"abef9d48-efe9-4491-96cd-a1cd94fecfe1\") " Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.742788 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abef9d48-efe9-4491-96cd-a1cd94fecfe1-kube-api-access-5ttpq" (OuterVolumeSpecName: "kube-api-access-5ttpq") pod "abef9d48-efe9-4491-96cd-a1cd94fecfe1" (UID: "abef9d48-efe9-4491-96cd-a1cd94fecfe1"). InnerVolumeSpecName "kube-api-access-5ttpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.744153 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abef9d48-efe9-4491-96cd-a1cd94fecfe1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "abef9d48-efe9-4491-96cd-a1cd94fecfe1" (UID: "abef9d48-efe9-4491-96cd-a1cd94fecfe1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.839223 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5616676f-6b15-4f24-aa3f-5d88ad180239-operator-scripts\") pod \"5616676f-6b15-4f24-aa3f-5d88ad180239\" (UID: \"5616676f-6b15-4f24-aa3f-5d88ad180239\") " Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.839315 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmn6d\" (UniqueName: \"kubernetes.io/projected/dedaac0d-ebad-497d-9b8c-b6ee470782f2-kube-api-access-jmn6d\") pod \"dedaac0d-ebad-497d-9b8c-b6ee470782f2\" (UID: \"dedaac0d-ebad-497d-9b8c-b6ee470782f2\") " Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.839392 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5ntx\" (UniqueName: \"kubernetes.io/projected/5616676f-6b15-4f24-aa3f-5d88ad180239-kube-api-access-x5ntx\") pod \"5616676f-6b15-4f24-aa3f-5d88ad180239\" (UID: \"5616676f-6b15-4f24-aa3f-5d88ad180239\") " Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.839447 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lxr2\" (UniqueName: \"kubernetes.io/projected/042ec2fb-f4b7-4310-a6eb-6f71c8e440c7-kube-api-access-6lxr2\") pod \"042ec2fb-f4b7-4310-a6eb-6f71c8e440c7\" (UID: \"042ec2fb-f4b7-4310-a6eb-6f71c8e440c7\") " Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.839521 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedaac0d-ebad-497d-9b8c-b6ee470782f2-operator-scripts\") pod \"dedaac0d-ebad-497d-9b8c-b6ee470782f2\" (UID: \"dedaac0d-ebad-497d-9b8c-b6ee470782f2\") " Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.839544 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37bb31ad-8178-4924-ac57-6b8325e3cafa-operator-scripts\") pod \"37bb31ad-8178-4924-ac57-6b8325e3cafa\" (UID: \"37bb31ad-8178-4924-ac57-6b8325e3cafa\") " Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.839668 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2p6x\" (UniqueName: \"kubernetes.io/projected/37bb31ad-8178-4924-ac57-6b8325e3cafa-kube-api-access-p2p6x\") pod \"37bb31ad-8178-4924-ac57-6b8325e3cafa\" (UID: \"37bb31ad-8178-4924-ac57-6b8325e3cafa\") " Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.839700 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/042ec2fb-f4b7-4310-a6eb-6f71c8e440c7-operator-scripts\") pod \"042ec2fb-f4b7-4310-a6eb-6f71c8e440c7\" (UID: \"042ec2fb-f4b7-4310-a6eb-6f71c8e440c7\") " Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.839879 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5616676f-6b15-4f24-aa3f-5d88ad180239-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5616676f-6b15-4f24-aa3f-5d88ad180239" (UID: "5616676f-6b15-4f24-aa3f-5d88ad180239"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.840232 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abef9d48-efe9-4491-96cd-a1cd94fecfe1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.840258 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ttpq\" (UniqueName: \"kubernetes.io/projected/abef9d48-efe9-4491-96cd-a1cd94fecfe1-kube-api-access-5ttpq\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.840272 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5616676f-6b15-4f24-aa3f-5d88ad180239-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.840503 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37bb31ad-8178-4924-ac57-6b8325e3cafa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37bb31ad-8178-4924-ac57-6b8325e3cafa" (UID: "37bb31ad-8178-4924-ac57-6b8325e3cafa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.840505 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/042ec2fb-f4b7-4310-a6eb-6f71c8e440c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "042ec2fb-f4b7-4310-a6eb-6f71c8e440c7" (UID: "042ec2fb-f4b7-4310-a6eb-6f71c8e440c7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.840588 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dedaac0d-ebad-497d-9b8c-b6ee470782f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dedaac0d-ebad-497d-9b8c-b6ee470782f2" (UID: "dedaac0d-ebad-497d-9b8c-b6ee470782f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.843499 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/042ec2fb-f4b7-4310-a6eb-6f71c8e440c7-kube-api-access-6lxr2" (OuterVolumeSpecName: "kube-api-access-6lxr2") pod "042ec2fb-f4b7-4310-a6eb-6f71c8e440c7" (UID: "042ec2fb-f4b7-4310-a6eb-6f71c8e440c7"). InnerVolumeSpecName "kube-api-access-6lxr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.846058 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37bb31ad-8178-4924-ac57-6b8325e3cafa-kube-api-access-p2p6x" (OuterVolumeSpecName: "kube-api-access-p2p6x") pod "37bb31ad-8178-4924-ac57-6b8325e3cafa" (UID: "37bb31ad-8178-4924-ac57-6b8325e3cafa"). InnerVolumeSpecName "kube-api-access-p2p6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.848161 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5616676f-6b15-4f24-aa3f-5d88ad180239-kube-api-access-x5ntx" (OuterVolumeSpecName: "kube-api-access-x5ntx") pod "5616676f-6b15-4f24-aa3f-5d88ad180239" (UID: "5616676f-6b15-4f24-aa3f-5d88ad180239"). InnerVolumeSpecName "kube-api-access-x5ntx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.848256 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dedaac0d-ebad-497d-9b8c-b6ee470782f2-kube-api-access-jmn6d" (OuterVolumeSpecName: "kube-api-access-jmn6d") pod "dedaac0d-ebad-497d-9b8c-b6ee470782f2" (UID: "dedaac0d-ebad-497d-9b8c-b6ee470782f2"). InnerVolumeSpecName "kube-api-access-jmn6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.941855 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lxr2\" (UniqueName: \"kubernetes.io/projected/042ec2fb-f4b7-4310-a6eb-6f71c8e440c7-kube-api-access-6lxr2\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.941897 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedaac0d-ebad-497d-9b8c-b6ee470782f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.941907 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37bb31ad-8178-4924-ac57-6b8325e3cafa-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.941916 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2p6x\" (UniqueName: \"kubernetes.io/projected/37bb31ad-8178-4924-ac57-6b8325e3cafa-kube-api-access-p2p6x\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.941926 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/042ec2fb-f4b7-4310-a6eb-6f71c8e440c7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.941935 4754 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-jmn6d\" (UniqueName: \"kubernetes.io/projected/dedaac0d-ebad-497d-9b8c-b6ee470782f2-kube-api-access-jmn6d\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:03 crc kubenswrapper[4754]: I0218 19:37:03.941945 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5ntx\" (UniqueName: \"kubernetes.io/projected/5616676f-6b15-4f24-aa3f-5d88ad180239-kube-api-access-x5ntx\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.098532 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-7l2r8" event={"ID":"abef9d48-efe9-4491-96cd-a1cd94fecfe1","Type":"ContainerDied","Data":"e0cd5269ddf50602eb1b4379f90da632c3e67719fe386aff5718300860b1c3a2"} Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.098584 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-7l2r8" Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.098593 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0cd5269ddf50602eb1b4379f90da632c3e67719fe386aff5718300860b1c3a2" Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.100398 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ltst9" event={"ID":"fb85f580-fcb7-43ef-ac52-078b28e014f7","Type":"ContainerDied","Data":"ba9a8b9f8637224fc6c5800483cb277959825747c90e9da8c4082b6136cbe45c"} Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.100421 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba9a8b9f8637224fc6c5800483cb277959825747c90e9da8c4082b6136cbe45c" Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.100502 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-ltst9" Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.103935 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b36185b7-72d3-4f98-9928-e1c4c27594fa","Type":"ContainerStarted","Data":"6c6c418bcb7997c093e81b6c423c9040e1f5d9dec8140fce85bd446cbd642a6a"} Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.106121 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nkms6" event={"ID":"4c661122-825e-4bfd-bdbb-b89b44361abb","Type":"ContainerDied","Data":"e2d6e1a18b7b071d0b939608fb10b1f2ce324ee7536258e8789a0b525d547a31"} Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.106175 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2d6e1a18b7b071d0b939608fb10b1f2ce324ee7536258e8789a0b525d547a31" Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.106221 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nkms6" Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.108502 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-dqt2f" Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.108509 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dqt2f" event={"ID":"37bb31ad-8178-4924-ac57-6b8325e3cafa","Type":"ContainerDied","Data":"a5637c29b57c1704f2d8eb6bf1dbe2fe8d40241df30333bda4cabd7fc0b027dd"} Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.108613 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5637c29b57c1704f2d8eb6bf1dbe2fe8d40241df30333bda4cabd7fc0b027dd" Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.111088 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9bc3-account-create-update-hrjdd" event={"ID":"5616676f-6b15-4f24-aa3f-5d88ad180239","Type":"ContainerDied","Data":"35264f947c12bc1132996cb32de1c694362d2429cb639a27a143f4a65c745715"} Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.111115 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35264f947c12bc1132996cb32de1c694362d2429cb639a27a143f4a65c745715" Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.111244 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9bc3-account-create-update-hrjdd" Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.113544 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c266c06-8bfc-47ba-bab9-6ef36d6294e5","Type":"ContainerStarted","Data":"b2894d812696ce83d9192307210da0deb6fd293496355c1b160843dda45f1ad8"} Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.113775 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.115759 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9bd7-account-create-update-hzjql" event={"ID":"dd0e6457-f44a-4059-a466-328fde68deaa","Type":"ContainerDied","Data":"c30c2f8e1a99a715e25a822d46a5f8316e85ef545b536d403076920157164bcf"} Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.115788 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c30c2f8e1a99a715e25a822d46a5f8316e85ef545b536d403076920157164bcf" Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.115893 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9bd7-account-create-update-hzjql" Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.117706 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8883-account-create-update-zdkht" Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.117769 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8883-account-create-update-zdkht" event={"ID":"dedaac0d-ebad-497d-9b8c-b6ee470782f2","Type":"ContainerDied","Data":"058b21a125a7e00dce8523f64532d210826078b73e4e334a6174e9c8822351f4"} Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.117840 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="058b21a125a7e00dce8523f64532d210826078b73e4e334a6174e9c8822351f4" Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.120100 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-2262-account-create-update-45hjg" event={"ID":"042ec2fb-f4b7-4310-a6eb-6f71c8e440c7","Type":"ContainerDied","Data":"02394fbf02096656bc1d0541cf207138700eaff999ec2f412e0aa60b1ffe1486"} Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.120164 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02394fbf02096656bc1d0541cf207138700eaff999ec2f412e0aa60b1ffe1486" Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.120206 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-2262-account-create-update-45hjg" Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.156670 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.941061585 podStartE2EDuration="1m19.156642334s" podCreationTimestamp="2026-02-18 19:35:45 +0000 UTC" firstStartedPulling="2026-02-18 19:35:48.298119582 +0000 UTC m=+1050.748532378" lastFinishedPulling="2026-02-18 19:37:03.513700331 +0000 UTC m=+1125.964113127" observedRunningTime="2026-02-18 19:37:04.147115319 +0000 UTC m=+1126.597528115" watchObservedRunningTime="2026-02-18 19:37:04.156642334 +0000 UTC m=+1126.607055130" Feb 18 19:37:04 crc kubenswrapper[4754]: I0218 19:37:04.191343 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371951.663454 podStartE2EDuration="1m25.191322635s" podCreationTimestamp="2026-02-18 19:35:39 +0000 UTC" firstStartedPulling="2026-02-18 19:35:41.558171569 +0000 UTC m=+1044.008584365" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:04.190240521 +0000 UTC m=+1126.640653337" watchObservedRunningTime="2026-02-18 19:37:04.191322635 +0000 UTC m=+1126.641735431" Feb 18 19:37:04 crc kubenswrapper[4754]: E0218 19:37:04.695993 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda8323ff_9b88_4d7f_b400_3425c16e92d0.slice/crio-d8f04a8f0314382a747228bbdbfb358bb64ede7916eb5b0a2d2c9aefdd7c2385.scope\": RecentStats: unable to find data in memory cache]" Feb 18 19:37:05 crc kubenswrapper[4754]: I0218 19:37:05.129893 4754 generic.go:334] "Generic (PLEG): container finished" podID="da8323ff-9b88-4d7f-b400-3425c16e92d0" containerID="d8f04a8f0314382a747228bbdbfb358bb64ede7916eb5b0a2d2c9aefdd7c2385" exitCode=0 Feb 18 19:37:05 crc 
kubenswrapper[4754]: I0218 19:37:05.129973 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8lf58" event={"ID":"da8323ff-9b88-4d7f-b400-3425c16e92d0","Type":"ContainerDied","Data":"d8f04a8f0314382a747228bbdbfb358bb64ede7916eb5b0a2d2c9aefdd7c2385"} Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.477275 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.602745 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/da8323ff-9b88-4d7f-b400-3425c16e92d0-ring-data-devices\") pod \"da8323ff-9b88-4d7f-b400-3425c16e92d0\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.603018 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/da8323ff-9b88-4d7f-b400-3425c16e92d0-dispersionconf\") pod \"da8323ff-9b88-4d7f-b400-3425c16e92d0\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.603052 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8323ff-9b88-4d7f-b400-3425c16e92d0-combined-ca-bundle\") pod \"da8323ff-9b88-4d7f-b400-3425c16e92d0\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.603083 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da8323ff-9b88-4d7f-b400-3425c16e92d0-scripts\") pod \"da8323ff-9b88-4d7f-b400-3425c16e92d0\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.603169 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/da8323ff-9b88-4d7f-b400-3425c16e92d0-etc-swift\") pod \"da8323ff-9b88-4d7f-b400-3425c16e92d0\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.603193 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvs6w\" (UniqueName: \"kubernetes.io/projected/da8323ff-9b88-4d7f-b400-3425c16e92d0-kube-api-access-wvs6w\") pod \"da8323ff-9b88-4d7f-b400-3425c16e92d0\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.603244 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/da8323ff-9b88-4d7f-b400-3425c16e92d0-swiftconf\") pod \"da8323ff-9b88-4d7f-b400-3425c16e92d0\" (UID: \"da8323ff-9b88-4d7f-b400-3425c16e92d0\") " Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.603377 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da8323ff-9b88-4d7f-b400-3425c16e92d0-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "da8323ff-9b88-4d7f-b400-3425c16e92d0" (UID: "da8323ff-9b88-4d7f-b400-3425c16e92d0"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.603667 4754 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/da8323ff-9b88-4d7f-b400-3425c16e92d0-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.604737 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da8323ff-9b88-4d7f-b400-3425c16e92d0-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "da8323ff-9b88-4d7f-b400-3425c16e92d0" (UID: "da8323ff-9b88-4d7f-b400-3425c16e92d0"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.613837 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da8323ff-9b88-4d7f-b400-3425c16e92d0-kube-api-access-wvs6w" (OuterVolumeSpecName: "kube-api-access-wvs6w") pod "da8323ff-9b88-4d7f-b400-3425c16e92d0" (UID: "da8323ff-9b88-4d7f-b400-3425c16e92d0"). InnerVolumeSpecName "kube-api-access-wvs6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.614299 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8323ff-9b88-4d7f-b400-3425c16e92d0-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "da8323ff-9b88-4d7f-b400-3425c16e92d0" (UID: "da8323ff-9b88-4d7f-b400-3425c16e92d0"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.628743 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da8323ff-9b88-4d7f-b400-3425c16e92d0-scripts" (OuterVolumeSpecName: "scripts") pod "da8323ff-9b88-4d7f-b400-3425c16e92d0" (UID: "da8323ff-9b88-4d7f-b400-3425c16e92d0"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.630917 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8323ff-9b88-4d7f-b400-3425c16e92d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da8323ff-9b88-4d7f-b400-3425c16e92d0" (UID: "da8323ff-9b88-4d7f-b400-3425c16e92d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.636991 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da8323ff-9b88-4d7f-b400-3425c16e92d0-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "da8323ff-9b88-4d7f-b400-3425c16e92d0" (UID: "da8323ff-9b88-4d7f-b400-3425c16e92d0"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.705890 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvs6w\" (UniqueName: \"kubernetes.io/projected/da8323ff-9b88-4d7f-b400-3425c16e92d0-kube-api-access-wvs6w\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.705943 4754 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/da8323ff-9b88-4d7f-b400-3425c16e92d0-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.705956 4754 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/da8323ff-9b88-4d7f-b400-3425c16e92d0-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.705966 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da8323ff-9b88-4d7f-b400-3425c16e92d0-combined-ca-bundle\") 
on node \"crc\" DevicePath \"\"" Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.705979 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/da8323ff-9b88-4d7f-b400-3425c16e92d0-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.705989 4754 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/da8323ff-9b88-4d7f-b400-3425c16e92d0-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.805576 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ltst9"] Feb 18 19:37:06 crc kubenswrapper[4754]: I0218 19:37:06.813647 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ltst9"] Feb 18 19:37:07 crc kubenswrapper[4754]: I0218 19:37:07.157843 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8lf58" event={"ID":"da8323ff-9b88-4d7f-b400-3425c16e92d0","Type":"ContainerDied","Data":"efbc54918d6cb58bff581f83e617089c9667d824d9b9d685ea2c07bc955c0ddd"} Feb 18 19:37:07 crc kubenswrapper[4754]: I0218 19:37:07.158234 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efbc54918d6cb58bff581f83e617089c9667d824d9b9d685ea2c07bc955c0ddd" Feb 18 19:37:07 crc kubenswrapper[4754]: I0218 19:37:07.158330 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-8lf58" Feb 18 19:37:07 crc kubenswrapper[4754]: I0218 19:37:07.316650 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:08 crc kubenswrapper[4754]: I0218 19:37:08.096639 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:37:08 crc kubenswrapper[4754]: I0218 19:37:08.096709 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:37:08 crc kubenswrapper[4754]: I0218 19:37:08.096765 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:37:08 crc kubenswrapper[4754]: I0218 19:37:08.097647 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c0026e2ecf3c88a72909f5c7c0de86e2f4abd80ac8afc7c18f8c5bf2f5f9229e"} pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:37:08 crc kubenswrapper[4754]: I0218 19:37:08.097702 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" 
containerID="cri-o://c0026e2ecf3c88a72909f5c7c0de86e2f4abd80ac8afc7c18f8c5bf2f5f9229e" gracePeriod=600 Feb 18 19:37:08 crc kubenswrapper[4754]: I0218 19:37:08.221756 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb85f580-fcb7-43ef-ac52-078b28e014f7" path="/var/lib/kubelet/pods/fb85f580-fcb7-43ef-ac52-078b28e014f7/volumes" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.134745 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-f6s4l"] Feb 18 19:37:09 crc kubenswrapper[4754]: E0218 19:37:09.135929 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb85f580-fcb7-43ef-ac52-078b28e014f7" containerName="mariadb-account-create-update" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.135952 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb85f580-fcb7-43ef-ac52-078b28e014f7" containerName="mariadb-account-create-update" Feb 18 19:37:09 crc kubenswrapper[4754]: E0218 19:37:09.135969 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c661122-825e-4bfd-bdbb-b89b44361abb" containerName="mariadb-database-create" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.135977 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c661122-825e-4bfd-bdbb-b89b44361abb" containerName="mariadb-database-create" Feb 18 19:37:09 crc kubenswrapper[4754]: E0218 19:37:09.135989 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8323ff-9b88-4d7f-b400-3425c16e92d0" containerName="swift-ring-rebalance" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.135998 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8323ff-9b88-4d7f-b400-3425c16e92d0" containerName="swift-ring-rebalance" Feb 18 19:37:09 crc kubenswrapper[4754]: E0218 19:37:09.136007 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5616676f-6b15-4f24-aa3f-5d88ad180239" containerName="mariadb-account-create-update" Feb 18 19:37:09 crc 
kubenswrapper[4754]: I0218 19:37:09.136016 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="5616676f-6b15-4f24-aa3f-5d88ad180239" containerName="mariadb-account-create-update" Feb 18 19:37:09 crc kubenswrapper[4754]: E0218 19:37:09.136032 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0e6457-f44a-4059-a466-328fde68deaa" containerName="mariadb-account-create-update" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.136040 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0e6457-f44a-4059-a466-328fde68deaa" containerName="mariadb-account-create-update" Feb 18 19:37:09 crc kubenswrapper[4754]: E0218 19:37:09.136054 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abef9d48-efe9-4491-96cd-a1cd94fecfe1" containerName="mariadb-database-create" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.136062 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="abef9d48-efe9-4491-96cd-a1cd94fecfe1" containerName="mariadb-database-create" Feb 18 19:37:09 crc kubenswrapper[4754]: E0218 19:37:09.136076 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37bb31ad-8178-4924-ac57-6b8325e3cafa" containerName="mariadb-database-create" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.136083 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="37bb31ad-8178-4924-ac57-6b8325e3cafa" containerName="mariadb-database-create" Feb 18 19:37:09 crc kubenswrapper[4754]: E0218 19:37:09.136098 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0eba25e-9371-4594-b9b1-ab608ea8f4ec" containerName="ovn-config" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.136107 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0eba25e-9371-4594-b9b1-ab608ea8f4ec" containerName="ovn-config" Feb 18 19:37:09 crc kubenswrapper[4754]: E0218 19:37:09.136129 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4" 
containerName="mariadb-database-create" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.136157 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4" containerName="mariadb-database-create" Feb 18 19:37:09 crc kubenswrapper[4754]: E0218 19:37:09.136168 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedaac0d-ebad-497d-9b8c-b6ee470782f2" containerName="mariadb-account-create-update" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.136178 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedaac0d-ebad-497d-9b8c-b6ee470782f2" containerName="mariadb-account-create-update" Feb 18 19:37:09 crc kubenswrapper[4754]: E0218 19:37:09.136191 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042ec2fb-f4b7-4310-a6eb-6f71c8e440c7" containerName="mariadb-account-create-update" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.136199 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="042ec2fb-f4b7-4310-a6eb-6f71c8e440c7" containerName="mariadb-account-create-update" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.136387 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="042ec2fb-f4b7-4310-a6eb-6f71c8e440c7" containerName="mariadb-account-create-update" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.136397 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0eba25e-9371-4594-b9b1-ab608ea8f4ec" containerName="ovn-config" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.136409 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="da8323ff-9b88-4d7f-b400-3425c16e92d0" containerName="swift-ring-rebalance" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.136419 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="abef9d48-efe9-4491-96cd-a1cd94fecfe1" containerName="mariadb-database-create" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.136431 4754 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4c661122-825e-4bfd-bdbb-b89b44361abb" containerName="mariadb-database-create" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.136445 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="37bb31ad-8178-4924-ac57-6b8325e3cafa" containerName="mariadb-database-create" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.136456 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4" containerName="mariadb-database-create" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.136465 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb85f580-fcb7-43ef-ac52-078b28e014f7" containerName="mariadb-account-create-update" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.136472 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0e6457-f44a-4059-a466-328fde68deaa" containerName="mariadb-account-create-update" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.136479 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="5616676f-6b15-4f24-aa3f-5d88ad180239" containerName="mariadb-account-create-update" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.136494 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="dedaac0d-ebad-497d-9b8c-b6ee470782f2" containerName="mariadb-account-create-update" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.137252 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-f6s4l" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.139952 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ld58k" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.143626 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.151618 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-f6s4l"] Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.185651 4754 generic.go:334] "Generic (PLEG): container finished" podID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerID="c0026e2ecf3c88a72909f5c7c0de86e2f4abd80ac8afc7c18f8c5bf2f5f9229e" exitCode=0 Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.185728 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerDied","Data":"c0026e2ecf3c88a72909f5c7c0de86e2f4abd80ac8afc7c18f8c5bf2f5f9229e"} Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.185809 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"96444caa3510b8204a97c50f5062d060301a59e158a321374c108effb01ab6a8"} Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.185836 4754 scope.go:117] "RemoveContainer" containerID="0b71273a5a5eb671bd4925b19c78d15799283dc68e22f711f6ec374c23ac8c87" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.271322 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0db8affb-2742-46e4-a19d-a907e5c6d28d-config-data\") pod \"glance-db-sync-f6s4l\" (UID: 
\"0db8affb-2742-46e4-a19d-a907e5c6d28d\") " pod="openstack/glance-db-sync-f6s4l" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.271403 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k86b\" (UniqueName: \"kubernetes.io/projected/0db8affb-2742-46e4-a19d-a907e5c6d28d-kube-api-access-9k86b\") pod \"glance-db-sync-f6s4l\" (UID: \"0db8affb-2742-46e4-a19d-a907e5c6d28d\") " pod="openstack/glance-db-sync-f6s4l" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.271770 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0db8affb-2742-46e4-a19d-a907e5c6d28d-db-sync-config-data\") pod \"glance-db-sync-f6s4l\" (UID: \"0db8affb-2742-46e4-a19d-a907e5c6d28d\") " pod="openstack/glance-db-sync-f6s4l" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.271940 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db8affb-2742-46e4-a19d-a907e5c6d28d-combined-ca-bundle\") pod \"glance-db-sync-f6s4l\" (UID: \"0db8affb-2742-46e4-a19d-a907e5c6d28d\") " pod="openstack/glance-db-sync-f6s4l" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.373943 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db8affb-2742-46e4-a19d-a907e5c6d28d-combined-ca-bundle\") pod \"glance-db-sync-f6s4l\" (UID: \"0db8affb-2742-46e4-a19d-a907e5c6d28d\") " pod="openstack/glance-db-sync-f6s4l" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.374058 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " 
pod="openstack/swift-storage-0" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.374087 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0db8affb-2742-46e4-a19d-a907e5c6d28d-config-data\") pod \"glance-db-sync-f6s4l\" (UID: \"0db8affb-2742-46e4-a19d-a907e5c6d28d\") " pod="openstack/glance-db-sync-f6s4l" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.374135 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k86b\" (UniqueName: \"kubernetes.io/projected/0db8affb-2742-46e4-a19d-a907e5c6d28d-kube-api-access-9k86b\") pod \"glance-db-sync-f6s4l\" (UID: \"0db8affb-2742-46e4-a19d-a907e5c6d28d\") " pod="openstack/glance-db-sync-f6s4l" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.374247 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0db8affb-2742-46e4-a19d-a907e5c6d28d-db-sync-config-data\") pod \"glance-db-sync-f6s4l\" (UID: \"0db8affb-2742-46e4-a19d-a907e5c6d28d\") " pod="openstack/glance-db-sync-f6s4l" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.386104 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0db8affb-2742-46e4-a19d-a907e5c6d28d-config-data\") pod \"glance-db-sync-f6s4l\" (UID: \"0db8affb-2742-46e4-a19d-a907e5c6d28d\") " pod="openstack/glance-db-sync-f6s4l" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.403853 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0db8affb-2742-46e4-a19d-a907e5c6d28d-db-sync-config-data\") pod \"glance-db-sync-f6s4l\" (UID: \"0db8affb-2742-46e4-a19d-a907e5c6d28d\") " pod="openstack/glance-db-sync-f6s4l" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.404764 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db8affb-2742-46e4-a19d-a907e5c6d28d-combined-ca-bundle\") pod \"glance-db-sync-f6s4l\" (UID: \"0db8affb-2742-46e4-a19d-a907e5c6d28d\") " pod="openstack/glance-db-sync-f6s4l" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.404802 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ecc731f-ea98-4469-be08-1a12088339b5-etc-swift\") pod \"swift-storage-0\" (UID: \"8ecc731f-ea98-4469-be08-1a12088339b5\") " pod="openstack/swift-storage-0" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.412992 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k86b\" (UniqueName: \"kubernetes.io/projected/0db8affb-2742-46e4-a19d-a907e5c6d28d-kube-api-access-9k86b\") pod \"glance-db-sync-f6s4l\" (UID: \"0db8affb-2742-46e4-a19d-a907e5c6d28d\") " pod="openstack/glance-db-sync-f6s4l" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.465835 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-f6s4l" Feb 18 19:37:09 crc kubenswrapper[4754]: I0218 19:37:09.701547 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 18 19:37:10 crc kubenswrapper[4754]: I0218 19:37:10.074419 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-f6s4l"] Feb 18 19:37:10 crc kubenswrapper[4754]: I0218 19:37:10.195761 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-f6s4l" event={"ID":"0db8affb-2742-46e4-a19d-a907e5c6d28d","Type":"ContainerStarted","Data":"08dba6bf1d8cbc47a6ca5ef10acf781ebc317295d12631cba369c1e1b242f2f3"} Feb 18 19:37:10 crc kubenswrapper[4754]: I0218 19:37:10.323449 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 19:37:10 crc kubenswrapper[4754]: W0218 19:37:10.324890 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ecc731f_ea98_4469_be08_1a12088339b5.slice/crio-0a449a296085ab9bde026acbd223f75239ce4d1427147faf894bfbc47efc1fc5 WatchSource:0}: Error finding container 0a449a296085ab9bde026acbd223f75239ce4d1427147faf894bfbc47efc1fc5: Status 404 returned error can't find the container with id 0a449a296085ab9bde026acbd223f75239ce4d1427147faf894bfbc47efc1fc5 Feb 18 19:37:11 crc kubenswrapper[4754]: I0218 19:37:11.167445 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:37:11 crc kubenswrapper[4754]: I0218 19:37:11.213866 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ecc731f-ea98-4469-be08-1a12088339b5","Type":"ContainerStarted","Data":"0a449a296085ab9bde026acbd223f75239ce4d1427147faf894bfbc47efc1fc5"} Feb 18 19:37:11 crc kubenswrapper[4754]: I0218 19:37:11.815374 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4zlg5"] Feb 18 19:37:11 crc kubenswrapper[4754]: I0218 19:37:11.816679 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4zlg5" Feb 18 19:37:11 crc kubenswrapper[4754]: I0218 19:37:11.820331 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 19:37:11 crc kubenswrapper[4754]: I0218 19:37:11.836902 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4zlg5"] Feb 18 19:37:11 crc kubenswrapper[4754]: I0218 19:37:11.924852 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5thp\" (UniqueName: \"kubernetes.io/projected/0899bd7b-2289-4496-8521-c8dbea7874f7-kube-api-access-w5thp\") pod \"root-account-create-update-4zlg5\" (UID: \"0899bd7b-2289-4496-8521-c8dbea7874f7\") " pod="openstack/root-account-create-update-4zlg5" Feb 18 19:37:11 crc kubenswrapper[4754]: I0218 19:37:11.924913 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0899bd7b-2289-4496-8521-c8dbea7874f7-operator-scripts\") pod \"root-account-create-update-4zlg5\" (UID: \"0899bd7b-2289-4496-8521-c8dbea7874f7\") " pod="openstack/root-account-create-update-4zlg5" Feb 18 19:37:12 crc kubenswrapper[4754]: I0218 19:37:12.027119 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5thp\" (UniqueName: \"kubernetes.io/projected/0899bd7b-2289-4496-8521-c8dbea7874f7-kube-api-access-w5thp\") pod \"root-account-create-update-4zlg5\" (UID: \"0899bd7b-2289-4496-8521-c8dbea7874f7\") " pod="openstack/root-account-create-update-4zlg5" Feb 18 19:37:12 crc kubenswrapper[4754]: I0218 19:37:12.027607 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0899bd7b-2289-4496-8521-c8dbea7874f7-operator-scripts\") pod \"root-account-create-update-4zlg5\" (UID: 
\"0899bd7b-2289-4496-8521-c8dbea7874f7\") " pod="openstack/root-account-create-update-4zlg5" Feb 18 19:37:12 crc kubenswrapper[4754]: I0218 19:37:12.028615 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0899bd7b-2289-4496-8521-c8dbea7874f7-operator-scripts\") pod \"root-account-create-update-4zlg5\" (UID: \"0899bd7b-2289-4496-8521-c8dbea7874f7\") " pod="openstack/root-account-create-update-4zlg5" Feb 18 19:37:12 crc kubenswrapper[4754]: I0218 19:37:12.050876 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5thp\" (UniqueName: \"kubernetes.io/projected/0899bd7b-2289-4496-8521-c8dbea7874f7-kube-api-access-w5thp\") pod \"root-account-create-update-4zlg5\" (UID: \"0899bd7b-2289-4496-8521-c8dbea7874f7\") " pod="openstack/root-account-create-update-4zlg5" Feb 18 19:37:12 crc kubenswrapper[4754]: I0218 19:37:12.136022 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4zlg5" Feb 18 19:37:12 crc kubenswrapper[4754]: I0218 19:37:12.657115 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4zlg5"] Feb 18 19:37:12 crc kubenswrapper[4754]: W0218 19:37:12.666624 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0899bd7b_2289_4496_8521_c8dbea7874f7.slice/crio-eec5cbe40dea7cc990569e85a0c0f95ee4e048d2258789cb6b5325cda9c12318 WatchSource:0}: Error finding container eec5cbe40dea7cc990569e85a0c0f95ee4e048d2258789cb6b5325cda9c12318: Status 404 returned error can't find the container with id eec5cbe40dea7cc990569e85a0c0f95ee4e048d2258789cb6b5325cda9c12318 Feb 18 19:37:13 crc kubenswrapper[4754]: I0218 19:37:13.243521 4754 generic.go:334] "Generic (PLEG): container finished" podID="0899bd7b-2289-4496-8521-c8dbea7874f7" containerID="11b472fe52c282b49235c45ccd0db4e5e7237572c828da8a2f112fbd07e93737" exitCode=0 Feb 18 19:37:13 crc kubenswrapper[4754]: I0218 19:37:13.243690 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4zlg5" event={"ID":"0899bd7b-2289-4496-8521-c8dbea7874f7","Type":"ContainerDied","Data":"11b472fe52c282b49235c45ccd0db4e5e7237572c828da8a2f112fbd07e93737"} Feb 18 19:37:13 crc kubenswrapper[4754]: I0218 19:37:13.244432 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4zlg5" event={"ID":"0899bd7b-2289-4496-8521-c8dbea7874f7","Type":"ContainerStarted","Data":"eec5cbe40dea7cc990569e85a0c0f95ee4e048d2258789cb6b5325cda9c12318"} Feb 18 19:37:13 crc kubenswrapper[4754]: I0218 19:37:13.247306 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ecc731f-ea98-4469-be08-1a12088339b5","Type":"ContainerStarted","Data":"3f4d07b0ce125ee0d0c5cb33e10ee02632f3ee9c42deacefa35d8f45a5ff581c"} Feb 18 19:37:13 crc 
kubenswrapper[4754]: I0218 19:37:13.247337 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ecc731f-ea98-4469-be08-1a12088339b5","Type":"ContainerStarted","Data":"7fbdbf434c4bc41f831acb3fd7b4b081ef2f3a5da6890350c905bb53bd07e972"} Feb 18 19:37:13 crc kubenswrapper[4754]: I0218 19:37:13.247348 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ecc731f-ea98-4469-be08-1a12088339b5","Type":"ContainerStarted","Data":"7f08b489bee41352bb605f3605620a171befba3b4da6670b13cec3634e0312b9"} Feb 18 19:37:13 crc kubenswrapper[4754]: I0218 19:37:13.247359 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ecc731f-ea98-4469-be08-1a12088339b5","Type":"ContainerStarted","Data":"f7fa5c0c2a6caa3e850cec4d307e6544ed0f23ae0c9086b98afac6dd292c020e"} Feb 18 19:37:14 crc kubenswrapper[4754]: I0218 19:37:14.267190 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ecc731f-ea98-4469-be08-1a12088339b5","Type":"ContainerStarted","Data":"71f262b978a8dabfb48b5a39e7566574d448247679023b50fb097fef56dbfb55"} Feb 18 19:37:14 crc kubenswrapper[4754]: I0218 19:37:14.535559 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4zlg5" Feb 18 19:37:14 crc kubenswrapper[4754]: I0218 19:37:14.575990 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0899bd7b-2289-4496-8521-c8dbea7874f7-operator-scripts\") pod \"0899bd7b-2289-4496-8521-c8dbea7874f7\" (UID: \"0899bd7b-2289-4496-8521-c8dbea7874f7\") " Feb 18 19:37:14 crc kubenswrapper[4754]: I0218 19:37:14.576069 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5thp\" (UniqueName: \"kubernetes.io/projected/0899bd7b-2289-4496-8521-c8dbea7874f7-kube-api-access-w5thp\") pod \"0899bd7b-2289-4496-8521-c8dbea7874f7\" (UID: \"0899bd7b-2289-4496-8521-c8dbea7874f7\") " Feb 18 19:37:14 crc kubenswrapper[4754]: I0218 19:37:14.577335 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0899bd7b-2289-4496-8521-c8dbea7874f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0899bd7b-2289-4496-8521-c8dbea7874f7" (UID: "0899bd7b-2289-4496-8521-c8dbea7874f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:14 crc kubenswrapper[4754]: I0218 19:37:14.614589 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0899bd7b-2289-4496-8521-c8dbea7874f7-kube-api-access-w5thp" (OuterVolumeSpecName: "kube-api-access-w5thp") pod "0899bd7b-2289-4496-8521-c8dbea7874f7" (UID: "0899bd7b-2289-4496-8521-c8dbea7874f7"). InnerVolumeSpecName "kube-api-access-w5thp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:14 crc kubenswrapper[4754]: I0218 19:37:14.678676 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0899bd7b-2289-4496-8521-c8dbea7874f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:14 crc kubenswrapper[4754]: I0218 19:37:14.678719 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5thp\" (UniqueName: \"kubernetes.io/projected/0899bd7b-2289-4496-8521-c8dbea7874f7-kube-api-access-w5thp\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:15 crc kubenswrapper[4754]: I0218 19:37:15.281038 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4zlg5" event={"ID":"0899bd7b-2289-4496-8521-c8dbea7874f7","Type":"ContainerDied","Data":"eec5cbe40dea7cc990569e85a0c0f95ee4e048d2258789cb6b5325cda9c12318"} Feb 18 19:37:15 crc kubenswrapper[4754]: I0218 19:37:15.281400 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eec5cbe40dea7cc990569e85a0c0f95ee4e048d2258789cb6b5325cda9c12318" Feb 18 19:37:15 crc kubenswrapper[4754]: I0218 19:37:15.281473 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4zlg5" Feb 18 19:37:15 crc kubenswrapper[4754]: I0218 19:37:15.287974 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ecc731f-ea98-4469-be08-1a12088339b5","Type":"ContainerStarted","Data":"8eace17d48052fcd356835f5047f3f1253c18deed47a466f41ad3e468ce05db3"} Feb 18 19:37:15 crc kubenswrapper[4754]: I0218 19:37:15.288052 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ecc731f-ea98-4469-be08-1a12088339b5","Type":"ContainerStarted","Data":"a6307ba9e5d5f87f1d9b0bdeeeaad8faf4e0fe216d661109d8f45d5cbaf3584e"} Feb 18 19:37:15 crc kubenswrapper[4754]: I0218 19:37:15.288069 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ecc731f-ea98-4469-be08-1a12088339b5","Type":"ContainerStarted","Data":"7705dfc2f56eb74c43a2def4b2adf3aa3aef4efacf7a1f420701cfe344a22a30"} Feb 18 19:37:16 crc kubenswrapper[4754]: I0218 19:37:16.302612 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ecc731f-ea98-4469-be08-1a12088339b5","Type":"ContainerStarted","Data":"cf1266dfd859eb9c43570c55465523f115e3801022f2822fa676134b4bfd8b9c"} Feb 18 19:37:17 crc kubenswrapper[4754]: I0218 19:37:17.315979 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:17 crc kubenswrapper[4754]: I0218 19:37:17.319152 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:17 crc kubenswrapper[4754]: I0218 19:37:17.322134 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ecc731f-ea98-4469-be08-1a12088339b5","Type":"ContainerStarted","Data":"3a6014a6e6579f4b2a561b497f66fb99a148f674223fc8ba9422065d105a98e1"} Feb 18 19:37:17 crc kubenswrapper[4754]: I0218 
19:37:17.322197 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ecc731f-ea98-4469-be08-1a12088339b5","Type":"ContainerStarted","Data":"75d8f53a6953dd6e278165423ccddf9430e79aac1c2de683fe266b320a48cc49"} Feb 18 19:37:17 crc kubenswrapper[4754]: I0218 19:37:17.322207 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ecc731f-ea98-4469-be08-1a12088339b5","Type":"ContainerStarted","Data":"1cf844275e11e3122e5680266e83efb581c25b9dfde0004a355d57e7d7a1abf3"} Feb 18 19:37:17 crc kubenswrapper[4754]: I0218 19:37:17.322216 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ecc731f-ea98-4469-be08-1a12088339b5","Type":"ContainerStarted","Data":"dd3d2fac923445819fc2957f48624809046c42eb898f3607cca7699a7755293e"} Feb 18 19:37:18 crc kubenswrapper[4754]: I0218 19:37:18.356013 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ecc731f-ea98-4469-be08-1a12088339b5","Type":"ContainerStarted","Data":"e1d366d1635120dcebe66b316a15218ed7d47fbc182640c4b33e253fc8494645"} Feb 18 19:37:18 crc kubenswrapper[4754]: I0218 19:37:18.358656 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:20 crc kubenswrapper[4754]: I0218 19:37:20.662426 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 19:37:20 crc kubenswrapper[4754]: I0218 19:37:20.728547 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:37:20 crc kubenswrapper[4754]: I0218 19:37:20.728876 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="b36185b7-72d3-4f98-9928-e1c4c27594fa" containerName="prometheus" 
containerID="cri-o://8e1d7244df194c7c06cb685e0c720d145ede48a08d1e5df775ae2281d877c868" gracePeriod=600 Feb 18 19:37:20 crc kubenswrapper[4754]: I0218 19:37:20.729287 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="b36185b7-72d3-4f98-9928-e1c4c27594fa" containerName="thanos-sidecar" containerID="cri-o://6c6c418bcb7997c093e81b6c423c9040e1f5d9dec8140fce85bd446cbd642a6a" gracePeriod=600 Feb 18 19:37:20 crc kubenswrapper[4754]: I0218 19:37:20.729438 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="b36185b7-72d3-4f98-9928-e1c4c27594fa" containerName="config-reloader" containerID="cri-o://9c5896770be384d38ce1aa3dc0cc6e57ff321dc388b3752178617ca239c8d1bc" gracePeriod=600 Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.093289 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-kw8rh"] Feb 18 19:37:21 crc kubenswrapper[4754]: E0218 19:37:21.093774 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0899bd7b-2289-4496-8521-c8dbea7874f7" containerName="mariadb-account-create-update" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.093794 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="0899bd7b-2289-4496-8521-c8dbea7874f7" containerName="mariadb-account-create-update" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.094043 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="0899bd7b-2289-4496-8521-c8dbea7874f7" containerName="mariadb-account-create-update" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.094869 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-kw8rh" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.117430 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kw8rh"] Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.219177 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe7d1a77-a63f-475f-9ffb-8ce51ab1689d-operator-scripts\") pod \"cinder-db-create-kw8rh\" (UID: \"fe7d1a77-a63f-475f-9ffb-8ce51ab1689d\") " pod="openstack/cinder-db-create-kw8rh" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.219368 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbpk4\" (UniqueName: \"kubernetes.io/projected/fe7d1a77-a63f-475f-9ffb-8ce51ab1689d-kube-api-access-kbpk4\") pod \"cinder-db-create-kw8rh\" (UID: \"fe7d1a77-a63f-475f-9ffb-8ce51ab1689d\") " pod="openstack/cinder-db-create-kw8rh" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.235036 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-wl8ql"] Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.236898 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-wl8ql" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.244116 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-z76fp" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.247121 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.264136 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-wl8ql"] Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.321402 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d046b6fd-1000-4f80-af20-d756adbab2ea-db-sync-config-data\") pod \"watcher-db-sync-wl8ql\" (UID: \"d046b6fd-1000-4f80-af20-d756adbab2ea\") " pod="openstack/watcher-db-sync-wl8ql" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.321503 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-a469-account-create-update-trbvp"] Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.321520 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbpk4\" (UniqueName: \"kubernetes.io/projected/fe7d1a77-a63f-475f-9ffb-8ce51ab1689d-kube-api-access-kbpk4\") pod \"cinder-db-create-kw8rh\" (UID: \"fe7d1a77-a63f-475f-9ffb-8ce51ab1689d\") " pod="openstack/cinder-db-create-kw8rh" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.322532 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d046b6fd-1000-4f80-af20-d756adbab2ea-combined-ca-bundle\") pod \"watcher-db-sync-wl8ql\" (UID: \"d046b6fd-1000-4f80-af20-d756adbab2ea\") " pod="openstack/watcher-db-sync-wl8ql" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.322700 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe7d1a77-a63f-475f-9ffb-8ce51ab1689d-operator-scripts\") pod \"cinder-db-create-kw8rh\" (UID: \"fe7d1a77-a63f-475f-9ffb-8ce51ab1689d\") " pod="openstack/cinder-db-create-kw8rh" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.322839 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a469-account-create-update-trbvp" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.322893 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d046b6fd-1000-4f80-af20-d756adbab2ea-config-data\") pod \"watcher-db-sync-wl8ql\" (UID: \"d046b6fd-1000-4f80-af20-d756adbab2ea\") " pod="openstack/watcher-db-sync-wl8ql" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.322965 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6djg\" (UniqueName: \"kubernetes.io/projected/d046b6fd-1000-4f80-af20-d756adbab2ea-kube-api-access-q6djg\") pod \"watcher-db-sync-wl8ql\" (UID: \"d046b6fd-1000-4f80-af20-d756adbab2ea\") " pod="openstack/watcher-db-sync-wl8ql" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.324392 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe7d1a77-a63f-475f-9ffb-8ce51ab1689d-operator-scripts\") pod \"cinder-db-create-kw8rh\" (UID: \"fe7d1a77-a63f-475f-9ffb-8ce51ab1689d\") " pod="openstack/cinder-db-create-kw8rh" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.328631 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.346735 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-a469-account-create-update-trbvp"] Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.406211 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-lxzvw"] Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.407955 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-lxzvw" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.409656 4754 generic.go:334] "Generic (PLEG): container finished" podID="b36185b7-72d3-4f98-9928-e1c4c27594fa" containerID="6c6c418bcb7997c093e81b6c423c9040e1f5d9dec8140fce85bd446cbd642a6a" exitCode=0 Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.409698 4754 generic.go:334] "Generic (PLEG): container finished" podID="b36185b7-72d3-4f98-9928-e1c4c27594fa" containerID="9c5896770be384d38ce1aa3dc0cc6e57ff321dc388b3752178617ca239c8d1bc" exitCode=0 Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.409707 4754 generic.go:334] "Generic (PLEG): container finished" podID="b36185b7-72d3-4f98-9928-e1c4c27594fa" containerID="8e1d7244df194c7c06cb685e0c720d145ede48a08d1e5df775ae2281d877c868" exitCode=0 Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.409734 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b36185b7-72d3-4f98-9928-e1c4c27594fa","Type":"ContainerDied","Data":"6c6c418bcb7997c093e81b6c423c9040e1f5d9dec8140fce85bd446cbd642a6a"} Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.409785 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b36185b7-72d3-4f98-9928-e1c4c27594fa","Type":"ContainerDied","Data":"9c5896770be384d38ce1aa3dc0cc6e57ff321dc388b3752178617ca239c8d1bc"} Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.409797 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"b36185b7-72d3-4f98-9928-e1c4c27594fa","Type":"ContainerDied","Data":"8e1d7244df194c7c06cb685e0c720d145ede48a08d1e5df775ae2281d877c868"} Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.425329 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa838e2c-4d5f-4820-ae94-27d460ee1664-operator-scripts\") pod \"cinder-a469-account-create-update-trbvp\" (UID: \"fa838e2c-4d5f-4820-ae94-27d460ee1664\") " pod="openstack/cinder-a469-account-create-update-trbvp" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.425432 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfzzr\" (UniqueName: \"kubernetes.io/projected/fa838e2c-4d5f-4820-ae94-27d460ee1664-kube-api-access-rfzzr\") pod \"cinder-a469-account-create-update-trbvp\" (UID: \"fa838e2c-4d5f-4820-ae94-27d460ee1664\") " pod="openstack/cinder-a469-account-create-update-trbvp" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.425501 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d046b6fd-1000-4f80-af20-d756adbab2ea-db-sync-config-data\") pod \"watcher-db-sync-wl8ql\" (UID: \"d046b6fd-1000-4f80-af20-d756adbab2ea\") " pod="openstack/watcher-db-sync-wl8ql" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.425559 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d046b6fd-1000-4f80-af20-d756adbab2ea-combined-ca-bundle\") pod \"watcher-db-sync-wl8ql\" (UID: \"d046b6fd-1000-4f80-af20-d756adbab2ea\") " pod="openstack/watcher-db-sync-wl8ql" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.425610 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d046b6fd-1000-4f80-af20-d756adbab2ea-config-data\") pod \"watcher-db-sync-wl8ql\" (UID: \"d046b6fd-1000-4f80-af20-d756adbab2ea\") " pod="openstack/watcher-db-sync-wl8ql" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.425635 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6djg\" (UniqueName: \"kubernetes.io/projected/d046b6fd-1000-4f80-af20-d756adbab2ea-kube-api-access-q6djg\") pod \"watcher-db-sync-wl8ql\" (UID: \"d046b6fd-1000-4f80-af20-d756adbab2ea\") " pod="openstack/watcher-db-sync-wl8ql" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.429293 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d046b6fd-1000-4f80-af20-d756adbab2ea-db-sync-config-data\") pod \"watcher-db-sync-wl8ql\" (UID: \"d046b6fd-1000-4f80-af20-d756adbab2ea\") " pod="openstack/watcher-db-sync-wl8ql" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.429633 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbpk4\" (UniqueName: \"kubernetes.io/projected/fe7d1a77-a63f-475f-9ffb-8ce51ab1689d-kube-api-access-kbpk4\") pod \"cinder-db-create-kw8rh\" (UID: \"fe7d1a77-a63f-475f-9ffb-8ce51ab1689d\") " pod="openstack/cinder-db-create-kw8rh" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.430012 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-lxzvw"] Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.436250 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d046b6fd-1000-4f80-af20-d756adbab2ea-combined-ca-bundle\") pod \"watcher-db-sync-wl8ql\" (UID: \"d046b6fd-1000-4f80-af20-d756adbab2ea\") " pod="openstack/watcher-db-sync-wl8ql" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.443409 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-kw8rh" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.447394 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d046b6fd-1000-4f80-af20-d756adbab2ea-config-data\") pod \"watcher-db-sync-wl8ql\" (UID: \"d046b6fd-1000-4f80-af20-d756adbab2ea\") " pod="openstack/watcher-db-sync-wl8ql" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.458905 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6djg\" (UniqueName: \"kubernetes.io/projected/d046b6fd-1000-4f80-af20-d756adbab2ea-kube-api-access-q6djg\") pod \"watcher-db-sync-wl8ql\" (UID: \"d046b6fd-1000-4f80-af20-d756adbab2ea\") " pod="openstack/watcher-db-sync-wl8ql" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.507628 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-z7kq5"] Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.509530 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-z7kq5" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.525297 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-c557-account-create-update-qxlgj"] Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.527418 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c557-account-create-update-qxlgj" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.527934 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa838e2c-4d5f-4820-ae94-27d460ee1664-operator-scripts\") pod \"cinder-a469-account-create-update-trbvp\" (UID: \"fa838e2c-4d5f-4820-ae94-27d460ee1664\") " pod="openstack/cinder-a469-account-create-update-trbvp" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.527993 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfzzr\" (UniqueName: \"kubernetes.io/projected/fa838e2c-4d5f-4820-ae94-27d460ee1664-kube-api-access-rfzzr\") pod \"cinder-a469-account-create-update-trbvp\" (UID: \"fa838e2c-4d5f-4820-ae94-27d460ee1664\") " pod="openstack/cinder-a469-account-create-update-trbvp" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.528071 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4de98594-db65-4550-bd48-dddb366bc4de-operator-scripts\") pod \"barbican-db-create-lxzvw\" (UID: \"4de98594-db65-4550-bd48-dddb366bc4de\") " pod="openstack/barbican-db-create-lxzvw" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.528111 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgmtt\" (UniqueName: \"kubernetes.io/projected/4de98594-db65-4550-bd48-dddb366bc4de-kube-api-access-lgmtt\") pod \"barbican-db-create-lxzvw\" (UID: \"4de98594-db65-4550-bd48-dddb366bc4de\") " pod="openstack/barbican-db-create-lxzvw" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.529333 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa838e2c-4d5f-4820-ae94-27d460ee1664-operator-scripts\") pod 
\"cinder-a469-account-create-update-trbvp\" (UID: \"fa838e2c-4d5f-4820-ae94-27d460ee1664\") " pod="openstack/cinder-a469-account-create-update-trbvp" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.530792 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.533758 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-z7kq5"] Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.543285 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c557-account-create-update-qxlgj"] Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.559497 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfzzr\" (UniqueName: \"kubernetes.io/projected/fa838e2c-4d5f-4820-ae94-27d460ee1664-kube-api-access-rfzzr\") pod \"cinder-a469-account-create-update-trbvp\" (UID: \"fa838e2c-4d5f-4820-ae94-27d460ee1664\") " pod="openstack/cinder-a469-account-create-update-trbvp" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.615366 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-wl8ql" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.627585 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-a073-account-create-update-wmjfz"] Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.628771 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-a073-account-create-update-wmjfz" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.630762 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3ce92b2-1e49-4847-a38e-7322e4089b05-operator-scripts\") pod \"neutron-db-create-z7kq5\" (UID: \"d3ce92b2-1e49-4847-a38e-7322e4089b05\") " pod="openstack/neutron-db-create-z7kq5" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.630801 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4tsm\" (UniqueName: \"kubernetes.io/projected/d3ce92b2-1e49-4847-a38e-7322e4089b05-kube-api-access-l4tsm\") pod \"neutron-db-create-z7kq5\" (UID: \"d3ce92b2-1e49-4847-a38e-7322e4089b05\") " pod="openstack/neutron-db-create-z7kq5" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.630855 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjppc\" (UniqueName: \"kubernetes.io/projected/e62a77df-9678-4173-bd87-a3451220eb34-kube-api-access-rjppc\") pod \"barbican-c557-account-create-update-qxlgj\" (UID: \"e62a77df-9678-4173-bd87-a3451220eb34\") " pod="openstack/barbican-c557-account-create-update-qxlgj" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.630896 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62a77df-9678-4173-bd87-a3451220eb34-operator-scripts\") pod \"barbican-c557-account-create-update-qxlgj\" (UID: \"e62a77df-9678-4173-bd87-a3451220eb34\") " pod="openstack/barbican-c557-account-create-update-qxlgj" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.630940 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4de98594-db65-4550-bd48-dddb366bc4de-operator-scripts\") pod \"barbican-db-create-lxzvw\" (UID: \"4de98594-db65-4550-bd48-dddb366bc4de\") " pod="openstack/barbican-db-create-lxzvw" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.630974 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgmtt\" (UniqueName: \"kubernetes.io/projected/4de98594-db65-4550-bd48-dddb366bc4de-kube-api-access-lgmtt\") pod \"barbican-db-create-lxzvw\" (UID: \"4de98594-db65-4550-bd48-dddb366bc4de\") " pod="openstack/barbican-db-create-lxzvw" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.631931 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4de98594-db65-4550-bd48-dddb366bc4de-operator-scripts\") pod \"barbican-db-create-lxzvw\" (UID: \"4de98594-db65-4550-bd48-dddb366bc4de\") " pod="openstack/barbican-db-create-lxzvw" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.635771 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.642721 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-tkp99"] Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.644289 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-tkp99" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.651749 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.651980 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.652273 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gvkx6" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.652406 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.674672 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgmtt\" (UniqueName: \"kubernetes.io/projected/4de98594-db65-4550-bd48-dddb366bc4de-kube-api-access-lgmtt\") pod \"barbican-db-create-lxzvw\" (UID: \"4de98594-db65-4550-bd48-dddb366bc4de\") " pod="openstack/barbican-db-create-lxzvw" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.675024 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a073-account-create-update-wmjfz"] Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.688032 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-tkp99"] Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.709304 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a469-account-create-update-trbvp" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.732317 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjppc\" (UniqueName: \"kubernetes.io/projected/e62a77df-9678-4173-bd87-a3451220eb34-kube-api-access-rjppc\") pod \"barbican-c557-account-create-update-qxlgj\" (UID: \"e62a77df-9678-4173-bd87-a3451220eb34\") " pod="openstack/barbican-c557-account-create-update-qxlgj" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.732400 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62a77df-9678-4173-bd87-a3451220eb34-operator-scripts\") pod \"barbican-c557-account-create-update-qxlgj\" (UID: \"e62a77df-9678-4173-bd87-a3451220eb34\") " pod="openstack/barbican-c557-account-create-update-qxlgj" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.732436 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfdrl\" (UniqueName: \"kubernetes.io/projected/ef3a989f-f82e-4062-8b49-3f4cb7959b73-kube-api-access-tfdrl\") pod \"keystone-db-sync-tkp99\" (UID: \"ef3a989f-f82e-4062-8b49-3f4cb7959b73\") " pod="openstack/keystone-db-sync-tkp99" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.732495 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q88mg\" (UniqueName: \"kubernetes.io/projected/1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1-kube-api-access-q88mg\") pod \"neutron-a073-account-create-update-wmjfz\" (UID: \"1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1\") " pod="openstack/neutron-a073-account-create-update-wmjfz" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.732558 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d3ce92b2-1e49-4847-a38e-7322e4089b05-operator-scripts\") pod \"neutron-db-create-z7kq5\" (UID: \"d3ce92b2-1e49-4847-a38e-7322e4089b05\") " pod="openstack/neutron-db-create-z7kq5" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.732589 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef3a989f-f82e-4062-8b49-3f4cb7959b73-combined-ca-bundle\") pod \"keystone-db-sync-tkp99\" (UID: \"ef3a989f-f82e-4062-8b49-3f4cb7959b73\") " pod="openstack/keystone-db-sync-tkp99" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.732623 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4tsm\" (UniqueName: \"kubernetes.io/projected/d3ce92b2-1e49-4847-a38e-7322e4089b05-kube-api-access-l4tsm\") pod \"neutron-db-create-z7kq5\" (UID: \"d3ce92b2-1e49-4847-a38e-7322e4089b05\") " pod="openstack/neutron-db-create-z7kq5" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.732667 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1-operator-scripts\") pod \"neutron-a073-account-create-update-wmjfz\" (UID: \"1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1\") " pod="openstack/neutron-a073-account-create-update-wmjfz" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.732698 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3a989f-f82e-4062-8b49-3f4cb7959b73-config-data\") pod \"keystone-db-sync-tkp99\" (UID: \"ef3a989f-f82e-4062-8b49-3f4cb7959b73\") " pod="openstack/keystone-db-sync-tkp99" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.733646 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d3ce92b2-1e49-4847-a38e-7322e4089b05-operator-scripts\") pod \"neutron-db-create-z7kq5\" (UID: \"d3ce92b2-1e49-4847-a38e-7322e4089b05\") " pod="openstack/neutron-db-create-z7kq5" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.733727 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62a77df-9678-4173-bd87-a3451220eb34-operator-scripts\") pod \"barbican-c557-account-create-update-qxlgj\" (UID: \"e62a77df-9678-4173-bd87-a3451220eb34\") " pod="openstack/barbican-c557-account-create-update-qxlgj" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.757001 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjppc\" (UniqueName: \"kubernetes.io/projected/e62a77df-9678-4173-bd87-a3451220eb34-kube-api-access-rjppc\") pod \"barbican-c557-account-create-update-qxlgj\" (UID: \"e62a77df-9678-4173-bd87-a3451220eb34\") " pod="openstack/barbican-c557-account-create-update-qxlgj" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.758348 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4tsm\" (UniqueName: \"kubernetes.io/projected/d3ce92b2-1e49-4847-a38e-7322e4089b05-kube-api-access-l4tsm\") pod \"neutron-db-create-z7kq5\" (UID: \"d3ce92b2-1e49-4847-a38e-7322e4089b05\") " pod="openstack/neutron-db-create-z7kq5" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.817774 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-lxzvw" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.834646 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1-operator-scripts\") pod \"neutron-a073-account-create-update-wmjfz\" (UID: \"1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1\") " pod="openstack/neutron-a073-account-create-update-wmjfz" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.834729 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3a989f-f82e-4062-8b49-3f4cb7959b73-config-data\") pod \"keystone-db-sync-tkp99\" (UID: \"ef3a989f-f82e-4062-8b49-3f4cb7959b73\") " pod="openstack/keystone-db-sync-tkp99" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.834823 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfdrl\" (UniqueName: \"kubernetes.io/projected/ef3a989f-f82e-4062-8b49-3f4cb7959b73-kube-api-access-tfdrl\") pod \"keystone-db-sync-tkp99\" (UID: \"ef3a989f-f82e-4062-8b49-3f4cb7959b73\") " pod="openstack/keystone-db-sync-tkp99" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.834870 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q88mg\" (UniqueName: \"kubernetes.io/projected/1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1-kube-api-access-q88mg\") pod \"neutron-a073-account-create-update-wmjfz\" (UID: \"1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1\") " pod="openstack/neutron-a073-account-create-update-wmjfz" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.834926 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef3a989f-f82e-4062-8b49-3f4cb7959b73-combined-ca-bundle\") pod \"keystone-db-sync-tkp99\" (UID: 
\"ef3a989f-f82e-4062-8b49-3f4cb7959b73\") " pod="openstack/keystone-db-sync-tkp99" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.835648 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1-operator-scripts\") pod \"neutron-a073-account-create-update-wmjfz\" (UID: \"1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1\") " pod="openstack/neutron-a073-account-create-update-wmjfz" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.840062 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef3a989f-f82e-4062-8b49-3f4cb7959b73-combined-ca-bundle\") pod \"keystone-db-sync-tkp99\" (UID: \"ef3a989f-f82e-4062-8b49-3f4cb7959b73\") " pod="openstack/keystone-db-sync-tkp99" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.840839 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3a989f-f82e-4062-8b49-3f4cb7959b73-config-data\") pod \"keystone-db-sync-tkp99\" (UID: \"ef3a989f-f82e-4062-8b49-3f4cb7959b73\") " pod="openstack/keystone-db-sync-tkp99" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.849741 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-z7kq5" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.853776 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfdrl\" (UniqueName: \"kubernetes.io/projected/ef3a989f-f82e-4062-8b49-3f4cb7959b73-kube-api-access-tfdrl\") pod \"keystone-db-sync-tkp99\" (UID: \"ef3a989f-f82e-4062-8b49-3f4cb7959b73\") " pod="openstack/keystone-db-sync-tkp99" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.857987 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q88mg\" (UniqueName: \"kubernetes.io/projected/1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1-kube-api-access-q88mg\") pod \"neutron-a073-account-create-update-wmjfz\" (UID: \"1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1\") " pod="openstack/neutron-a073-account-create-update-wmjfz" Feb 18 19:37:21 crc kubenswrapper[4754]: I0218 19:37:21.859997 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c557-account-create-update-qxlgj" Feb 18 19:37:22 crc kubenswrapper[4754]: I0218 19:37:22.000806 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a073-account-create-update-wmjfz" Feb 18 19:37:22 crc kubenswrapper[4754]: I0218 19:37:22.015859 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-tkp99" Feb 18 19:37:22 crc kubenswrapper[4754]: I0218 19:37:22.316665 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="b36185b7-72d3-4f98-9928-e1c4c27594fa" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.110:9090/-/ready\": dial tcp 10.217.0.110:9090: connect: connection refused" Feb 18 19:37:26 crc kubenswrapper[4754]: E0218 19:37:26.668738 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 18 19:37:26 crc kubenswrapper[4754]: E0218 19:37:26.669940 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9
k86b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-f6s4l_openstack(0db8affb-2742-46e4-a19d-a907e5c6d28d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:37:26 crc kubenswrapper[4754]: E0218 19:37:26.671243 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-f6s4l" podUID="0db8affb-2742-46e4-a19d-a907e5c6d28d" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.084696 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.252410 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a469-account-create-update-trbvp"] Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.259561 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b36185b7-72d3-4f98-9928-e1c4c27594fa-web-config\") pod \"b36185b7-72d3-4f98-9928-e1c4c27594fa\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.259751 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\") pod \"b36185b7-72d3-4f98-9928-e1c4c27594fa\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.259835 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmwkg\" (UniqueName: \"kubernetes.io/projected/b36185b7-72d3-4f98-9928-e1c4c27594fa-kube-api-access-qmwkg\") pod \"b36185b7-72d3-4f98-9928-e1c4c27594fa\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.259866 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b36185b7-72d3-4f98-9928-e1c4c27594fa-config\") pod \"b36185b7-72d3-4f98-9928-e1c4c27594fa\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.259917 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b36185b7-72d3-4f98-9928-e1c4c27594fa-thanos-prometheus-http-client-file\") pod 
\"b36185b7-72d3-4f98-9928-e1c4c27594fa\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.260011 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b36185b7-72d3-4f98-9928-e1c4c27594fa-prometheus-metric-storage-rulefiles-2\") pod \"b36185b7-72d3-4f98-9928-e1c4c27594fa\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.260033 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b36185b7-72d3-4f98-9928-e1c4c27594fa-tls-assets\") pod \"b36185b7-72d3-4f98-9928-e1c4c27594fa\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.260057 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b36185b7-72d3-4f98-9928-e1c4c27594fa-config-out\") pod \"b36185b7-72d3-4f98-9928-e1c4c27594fa\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.260100 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b36185b7-72d3-4f98-9928-e1c4c27594fa-prometheus-metric-storage-rulefiles-1\") pod \"b36185b7-72d3-4f98-9928-e1c4c27594fa\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.260150 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b36185b7-72d3-4f98-9928-e1c4c27594fa-prometheus-metric-storage-rulefiles-0\") pod \"b36185b7-72d3-4f98-9928-e1c4c27594fa\" (UID: \"b36185b7-72d3-4f98-9928-e1c4c27594fa\") " Feb 18 19:37:27 
crc kubenswrapper[4754]: I0218 19:37:27.261181 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b36185b7-72d3-4f98-9928-e1c4c27594fa-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "b36185b7-72d3-4f98-9928-e1c4c27594fa" (UID: "b36185b7-72d3-4f98-9928-e1c4c27594fa"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.266419 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b36185b7-72d3-4f98-9928-e1c4c27594fa-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "b36185b7-72d3-4f98-9928-e1c4c27594fa" (UID: "b36185b7-72d3-4f98-9928-e1c4c27594fa"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.266713 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b36185b7-72d3-4f98-9928-e1c4c27594fa-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "b36185b7-72d3-4f98-9928-e1c4c27594fa" (UID: "b36185b7-72d3-4f98-9928-e1c4c27594fa"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.271704 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36185b7-72d3-4f98-9928-e1c4c27594fa-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "b36185b7-72d3-4f98-9928-e1c4c27594fa" (UID: "b36185b7-72d3-4f98-9928-e1c4c27594fa"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.274221 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b36185b7-72d3-4f98-9928-e1c4c27594fa-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "b36185b7-72d3-4f98-9928-e1c4c27594fa" (UID: "b36185b7-72d3-4f98-9928-e1c4c27594fa"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.281297 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b36185b7-72d3-4f98-9928-e1c4c27594fa-config-out" (OuterVolumeSpecName: "config-out") pod "b36185b7-72d3-4f98-9928-e1c4c27594fa" (UID: "b36185b7-72d3-4f98-9928-e1c4c27594fa"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.284113 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b36185b7-72d3-4f98-9928-e1c4c27594fa-kube-api-access-qmwkg" (OuterVolumeSpecName: "kube-api-access-qmwkg") pod "b36185b7-72d3-4f98-9928-e1c4c27594fa" (UID: "b36185b7-72d3-4f98-9928-e1c4c27594fa"). InnerVolumeSpecName "kube-api-access-qmwkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.298410 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36185b7-72d3-4f98-9928-e1c4c27594fa-config" (OuterVolumeSpecName: "config") pod "b36185b7-72d3-4f98-9928-e1c4c27594fa" (UID: "b36185b7-72d3-4f98-9928-e1c4c27594fa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.308217 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "b36185b7-72d3-4f98-9928-e1c4c27594fa" (UID: "b36185b7-72d3-4f98-9928-e1c4c27594fa"). InnerVolumeSpecName "pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.339279 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36185b7-72d3-4f98-9928-e1c4c27594fa-web-config" (OuterVolumeSpecName: "web-config") pod "b36185b7-72d3-4f98-9928-e1c4c27594fa" (UID: "b36185b7-72d3-4f98-9928-e1c4c27594fa"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.363401 4754 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\") on node \"crc\" " Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.363730 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmwkg\" (UniqueName: \"kubernetes.io/projected/b36185b7-72d3-4f98-9928-e1c4c27594fa-kube-api-access-qmwkg\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.363740 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b36185b7-72d3-4f98-9928-e1c4c27594fa-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.363758 4754 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/b36185b7-72d3-4f98-9928-e1c4c27594fa-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.363771 4754 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b36185b7-72d3-4f98-9928-e1c4c27594fa-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.363781 4754 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b36185b7-72d3-4f98-9928-e1c4c27594fa-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.363791 4754 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b36185b7-72d3-4f98-9928-e1c4c27594fa-config-out\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.363801 4754 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b36185b7-72d3-4f98-9928-e1c4c27594fa-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.363812 4754 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b36185b7-72d3-4f98-9928-e1c4c27594fa-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.363825 4754 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b36185b7-72d3-4f98-9928-e1c4c27594fa-web-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.403512 4754 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.403929 4754 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6") on node "crc" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.465590 4754 reconciler_common.go:293] "Volume detached for volume \"pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.485277 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b36185b7-72d3-4f98-9928-e1c4c27594fa","Type":"ContainerDied","Data":"7971fd3605bd5c96ff4596ce9da6b9d90562338378cc199193e22388f3bcd13d"} Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.485337 4754 scope.go:117] "RemoveContainer" containerID="6c6c418bcb7997c093e81b6c423c9040e1f5d9dec8140fce85bd446cbd642a6a" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.485515 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: W0218 19:37:27.493773 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3ce92b2_1e49_4847_a38e_7322e4089b05.slice/crio-c4e50aa34d0d7d9f0ebf058837f8e9ad3c421a9646bad6781ecd5b924b31764e WatchSource:0}: Error finding container c4e50aa34d0d7d9f0ebf058837f8e9ad3c421a9646bad6781ecd5b924b31764e: Status 404 returned error can't find the container with id c4e50aa34d0d7d9f0ebf058837f8e9ad3c421a9646bad6781ecd5b924b31764e Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.495424 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a073-account-create-update-wmjfz"] Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.501612 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8ecc731f-ea98-4469-be08-1a12088339b5","Type":"ContainerStarted","Data":"f23480274efbe77cc01c43d8557c74dc236ccb5af8e87a0058d8a6b8a9fae8e1"} Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.507209 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a469-account-create-update-trbvp" event={"ID":"fa838e2c-4d5f-4820-ae94-27d460ee1664","Type":"ContainerStarted","Data":"e32104b4c491066b49b2dd2f3aa619dc2d8d10b9ad2bea55c30cc91af575c164"} Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.518821 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-z7kq5"] Feb 18 19:37:27 crc kubenswrapper[4754]: E0218 19:37:27.523095 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-f6s4l" podUID="0db8affb-2742-46e4-a19d-a907e5c6d28d" Feb 18 19:37:27 crc 
kubenswrapper[4754]: I0218 19:37:27.553442 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=45.937656759 podStartE2EDuration="51.553415286s" podCreationTimestamp="2026-02-18 19:36:36 +0000 UTC" firstStartedPulling="2026-02-18 19:37:10.326823801 +0000 UTC m=+1132.777236597" lastFinishedPulling="2026-02-18 19:37:15.942582318 +0000 UTC m=+1138.392995124" observedRunningTime="2026-02-18 19:37:27.535804112 +0000 UTC m=+1149.986216918" watchObservedRunningTime="2026-02-18 19:37:27.553415286 +0000 UTC m=+1150.003828092" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.553795 4754 scope.go:117] "RemoveContainer" containerID="9c5896770be384d38ce1aa3dc0cc6e57ff321dc388b3752178617ca239c8d1bc" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.599156 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.619922 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.647885 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-a469-account-create-update-trbvp" podStartSLOduration=6.647855412 podStartE2EDuration="6.647855412s" podCreationTimestamp="2026-02-18 19:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:27.645703235 +0000 UTC m=+1150.096116041" watchObservedRunningTime="2026-02-18 19:37:27.647855412 +0000 UTC m=+1150.098268208" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.656841 4754 scope.go:117] "RemoveContainer" containerID="8e1d7244df194c7c06cb685e0c720d145ede48a08d1e5df775ae2281d877c868" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.674052 4754 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:37:27 crc kubenswrapper[4754]: E0218 19:37:27.674532 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36185b7-72d3-4f98-9928-e1c4c27594fa" containerName="thanos-sidecar" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.674547 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36185b7-72d3-4f98-9928-e1c4c27594fa" containerName="thanos-sidecar" Feb 18 19:37:27 crc kubenswrapper[4754]: E0218 19:37:27.674586 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36185b7-72d3-4f98-9928-e1c4c27594fa" containerName="init-config-reloader" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.674593 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36185b7-72d3-4f98-9928-e1c4c27594fa" containerName="init-config-reloader" Feb 18 19:37:27 crc kubenswrapper[4754]: E0218 19:37:27.674601 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36185b7-72d3-4f98-9928-e1c4c27594fa" containerName="config-reloader" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.674608 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36185b7-72d3-4f98-9928-e1c4c27594fa" containerName="config-reloader" Feb 18 19:37:27 crc kubenswrapper[4754]: E0218 19:37:27.674622 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36185b7-72d3-4f98-9928-e1c4c27594fa" containerName="prometheus" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.674628 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36185b7-72d3-4f98-9928-e1c4c27594fa" containerName="prometheus" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.674808 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="b36185b7-72d3-4f98-9928-e1c4c27594fa" containerName="thanos-sidecar" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.674823 4754 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b36185b7-72d3-4f98-9928-e1c4c27594fa" containerName="prometheus" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.674838 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="b36185b7-72d3-4f98-9928-e1c4c27594fa" containerName="config-reloader" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.706520 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.707493 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.709853 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.712105 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.712521 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.712697 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.712756 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-jgskl" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.712816 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.712900 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.712994 4754 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.719361 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.772129 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.772192 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.772231 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.772266 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-prometheus-metric-storage-rulefiles-1\") pod 
\"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.772287 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.772306 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.772326 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-config\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.772356 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf9xx\" (UniqueName: \"kubernetes.io/projected/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-kube-api-access-bf9xx\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.772395 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.772431 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.772553 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.772912 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.773012 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 
19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.778422 4754 scope.go:117] "RemoveContainer" containerID="5a02a86502889e66b82b41c6c17ab028c2bfb4975fdc645fbe52fcbc950c3263" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.875407 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.875467 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.875494 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.875518 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.875564 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.875606 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.875623 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.875644 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.875663 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-config\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " 
pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.875690 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf9xx\" (UniqueName: \"kubernetes.io/projected/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-kube-api-access-bf9xx\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.875729 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.875758 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.875782 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.879469 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-fk8xw"] Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.881340 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.882814 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.883320 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.884978 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.887024 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.888069 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " 
pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.889224 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.889577 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.889656 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.895820 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.897986 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-fk8xw"] Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.898629 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.901192 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.901403 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.901445 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/81ceb4b204ea14622a6725e3273bbd9693392e71b58bdedf5b3f6ad4f339a7ba/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.926215 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-config\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.926424 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf9xx\" (UniqueName: \"kubernetes.io/projected/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-kube-api-access-bf9xx\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") 
" pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:27 crc kubenswrapper[4754]: I0218 19:37:27.959934 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kw8rh"] Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.059028 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-tkp99"] Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.067042 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-wl8ql"] Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.091674 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-fk8xw\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.091800 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-fk8xw\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.091885 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-config\") pod \"dnsmasq-dns-5c79d794d7-fk8xw\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.091929 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-dns-swift-storage-0\") 
pod \"dnsmasq-dns-5c79d794d7-fk8xw\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.092034 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-fk8xw\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.092077 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh7fs\" (UniqueName: \"kubernetes.io/projected/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-kube-api-access-hh7fs\") pod \"dnsmasq-dns-5c79d794d7-fk8xw\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.117337 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-lxzvw"] Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.152844 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\") pod \"prometheus-metric-storage-0\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.158039 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c557-account-create-update-qxlgj"] Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.201602 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5c79d794d7-fk8xw\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.201906 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh7fs\" (UniqueName: \"kubernetes.io/projected/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-kube-api-access-hh7fs\") pod \"dnsmasq-dns-5c79d794d7-fk8xw\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.202389 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-fk8xw\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.202519 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-fk8xw\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.202643 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-config\") pod \"dnsmasq-dns-5c79d794d7-fk8xw\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.202747 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-fk8xw\" (UID: 
\"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.203437 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-fk8xw\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.204174 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-fk8xw\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.204852 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-fk8xw\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.205246 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-config\") pod \"dnsmasq-dns-5c79d794d7-fk8xw\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.212567 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-fk8xw\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:28 crc 
kubenswrapper[4754]: I0218 19:37:28.234462 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b36185b7-72d3-4f98-9928-e1c4c27594fa" path="/var/lib/kubelet/pods/b36185b7-72d3-4f98-9928-e1c4c27594fa/volumes" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.275188 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh7fs\" (UniqueName: \"kubernetes.io/projected/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-kube-api-access-hh7fs\") pod \"dnsmasq-dns-5c79d794d7-fk8xw\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.393334 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.404824 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.525950 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tkp99" event={"ID":"ef3a989f-f82e-4062-8b49-3f4cb7959b73","Type":"ContainerStarted","Data":"9b3765b7261f3148e4b30ac4c1f59868950d565a74f22efa4bd65f3904844806"} Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.528687 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-lxzvw" event={"ID":"4de98594-db65-4550-bd48-dddb366bc4de","Type":"ContainerStarted","Data":"5c83ff57dc55314f7aa9680971c3f83e552cfc233c6dd47b6066f7a00e978c19"} Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.531957 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a073-account-create-update-wmjfz" event={"ID":"1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1","Type":"ContainerStarted","Data":"d980310407ce084ecb2e43b4d50a473957d8610e0cc11649fb05f847ddce83c7"} Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 
19:37:28.531988 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a073-account-create-update-wmjfz" event={"ID":"1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1","Type":"ContainerStarted","Data":"4481a71f64681fe6b21fd9b3c3a9a47113b50b60a78108e404616baf3d459c37"} Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.543480 4754 generic.go:334] "Generic (PLEG): container finished" podID="fa838e2c-4d5f-4820-ae94-27d460ee1664" containerID="3db40c649da737a2917b3da1b1fc3ffc0740b3e3b333ce052a8bb87c3775a01c" exitCode=0 Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.543584 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a469-account-create-update-trbvp" event={"ID":"fa838e2c-4d5f-4820-ae94-27d460ee1664","Type":"ContainerDied","Data":"3db40c649da737a2917b3da1b1fc3ffc0740b3e3b333ce052a8bb87c3775a01c"} Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.553884 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c557-account-create-update-qxlgj" event={"ID":"e62a77df-9678-4173-bd87-a3451220eb34","Type":"ContainerStarted","Data":"25ebfc71295d680fd35f23a425ce60a26e64e322d4bf0cb2cf7de3ee43793d42"} Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.563614 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-a073-account-create-update-wmjfz" podStartSLOduration=7.563588038 podStartE2EDuration="7.563588038s" podCreationTimestamp="2026-02-18 19:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:28.550575317 +0000 UTC m=+1151.000988113" watchObservedRunningTime="2026-02-18 19:37:28.563588038 +0000 UTC m=+1151.014000834" Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.566475 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-wl8ql" 
event={"ID":"d046b6fd-1000-4f80-af20-d756adbab2ea","Type":"ContainerStarted","Data":"e64df57d4ef3cb3cdba8fb6f1ca5ba9fa8bef07362b4cd35bd5fa4505eae3eea"} Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.571278 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kw8rh" event={"ID":"fe7d1a77-a63f-475f-9ffb-8ce51ab1689d","Type":"ContainerStarted","Data":"b15f442b569b638d99390c825c589e579538a3e54547053ec812bf472269805a"} Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.599520 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-z7kq5" event={"ID":"d3ce92b2-1e49-4847-a38e-7322e4089b05","Type":"ContainerStarted","Data":"7739c4f42187fa9818c14c2865f659911e40ca4500f6382acbb988497d975e86"} Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.599573 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-z7kq5" event={"ID":"d3ce92b2-1e49-4847-a38e-7322e4089b05","Type":"ContainerStarted","Data":"c4e50aa34d0d7d9f0ebf058837f8e9ad3c421a9646bad6781ecd5b924b31764e"} Feb 18 19:37:28 crc kubenswrapper[4754]: I0218 19:37:28.639821 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-z7kq5" podStartSLOduration=7.639790501 podStartE2EDuration="7.639790501s" podCreationTimestamp="2026-02-18 19:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:28.626707757 +0000 UTC m=+1151.077120553" watchObservedRunningTime="2026-02-18 19:37:28.639790501 +0000 UTC m=+1151.090203297" Feb 18 19:37:29 crc kubenswrapper[4754]: I0218 19:37:29.027478 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-fk8xw"] Feb 18 19:37:29 crc kubenswrapper[4754]: W0218 19:37:29.087780 4754 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b107ba5_e1b6_4bb5_aaa9_48f20858ff1c.slice/crio-19831b62002c20b0d2403135168700bec32db444030dfab2bc7d0f4138280b79 WatchSource:0}: Error finding container 19831b62002c20b0d2403135168700bec32db444030dfab2bc7d0f4138280b79: Status 404 returned error can't find the container with id 19831b62002c20b0d2403135168700bec32db444030dfab2bc7d0f4138280b79 Feb 18 19:37:29 crc kubenswrapper[4754]: I0218 19:37:29.143973 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:37:29 crc kubenswrapper[4754]: W0218 19:37:29.165313 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0d8175e_d7c0_49e6_bce1_770d2dac9b74.slice/crio-fa2bdedc70a58100b582f1e211e8a10d7182c18490a4112552a7bbadbf2a9aa9 WatchSource:0}: Error finding container fa2bdedc70a58100b582f1e211e8a10d7182c18490a4112552a7bbadbf2a9aa9: Status 404 returned error can't find the container with id fa2bdedc70a58100b582f1e211e8a10d7182c18490a4112552a7bbadbf2a9aa9 Feb 18 19:37:29 crc kubenswrapper[4754]: I0218 19:37:29.626938 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b0d8175e-d7c0-49e6-bce1-770d2dac9b74","Type":"ContainerStarted","Data":"fa2bdedc70a58100b582f1e211e8a10d7182c18490a4112552a7bbadbf2a9aa9"} Feb 18 19:37:29 crc kubenswrapper[4754]: I0218 19:37:29.630358 4754 generic.go:334] "Generic (PLEG): container finished" podID="e62a77df-9678-4173-bd87-a3451220eb34" containerID="27a37f9793bd652c507bc5cce74d2f502dadde1a98e33cf67faedd64c96441d8" exitCode=0 Feb 18 19:37:29 crc kubenswrapper[4754]: I0218 19:37:29.630466 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c557-account-create-update-qxlgj" 
event={"ID":"e62a77df-9678-4173-bd87-a3451220eb34","Type":"ContainerDied","Data":"27a37f9793bd652c507bc5cce74d2f502dadde1a98e33cf67faedd64c96441d8"} Feb 18 19:37:29 crc kubenswrapper[4754]: I0218 19:37:29.637221 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-lxzvw" event={"ID":"4de98594-db65-4550-bd48-dddb366bc4de","Type":"ContainerDied","Data":"28f799587f9bb06f73422208d8a51d499fee19b45d1b07447a1ae47de06dbc43"} Feb 18 19:37:29 crc kubenswrapper[4754]: I0218 19:37:29.637099 4754 generic.go:334] "Generic (PLEG): container finished" podID="4de98594-db65-4550-bd48-dddb366bc4de" containerID="28f799587f9bb06f73422208d8a51d499fee19b45d1b07447a1ae47de06dbc43" exitCode=0 Feb 18 19:37:29 crc kubenswrapper[4754]: I0218 19:37:29.640066 4754 generic.go:334] "Generic (PLEG): container finished" podID="fe7d1a77-a63f-475f-9ffb-8ce51ab1689d" containerID="4a2908a06b8204acf4446d82dad166dcabdc1f7797d7dc75136db58ef2e16ccc" exitCode=0 Feb 18 19:37:29 crc kubenswrapper[4754]: I0218 19:37:29.640176 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kw8rh" event={"ID":"fe7d1a77-a63f-475f-9ffb-8ce51ab1689d","Type":"ContainerDied","Data":"4a2908a06b8204acf4446d82dad166dcabdc1f7797d7dc75136db58ef2e16ccc"} Feb 18 19:37:29 crc kubenswrapper[4754]: I0218 19:37:29.643575 4754 generic.go:334] "Generic (PLEG): container finished" podID="1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1" containerID="d980310407ce084ecb2e43b4d50a473957d8610e0cc11649fb05f847ddce83c7" exitCode=0 Feb 18 19:37:29 crc kubenswrapper[4754]: I0218 19:37:29.643633 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a073-account-create-update-wmjfz" event={"ID":"1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1","Type":"ContainerDied","Data":"d980310407ce084ecb2e43b4d50a473957d8610e0cc11649fb05f847ddce83c7"} Feb 18 19:37:29 crc kubenswrapper[4754]: I0218 19:37:29.653822 4754 generic.go:334] "Generic (PLEG): container finished" 
podID="d3ce92b2-1e49-4847-a38e-7322e4089b05" containerID="7739c4f42187fa9818c14c2865f659911e40ca4500f6382acbb988497d975e86" exitCode=0 Feb 18 19:37:29 crc kubenswrapper[4754]: I0218 19:37:29.653903 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-z7kq5" event={"ID":"d3ce92b2-1e49-4847-a38e-7322e4089b05","Type":"ContainerDied","Data":"7739c4f42187fa9818c14c2865f659911e40ca4500f6382acbb988497d975e86"} Feb 18 19:37:29 crc kubenswrapper[4754]: I0218 19:37:29.656133 4754 generic.go:334] "Generic (PLEG): container finished" podID="8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c" containerID="42ecf3304606833ca11434c61a2b5c09933654690e018153c84cb8ee83d97cfe" exitCode=0 Feb 18 19:37:29 crc kubenswrapper[4754]: I0218 19:37:29.656610 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" event={"ID":"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c","Type":"ContainerDied","Data":"42ecf3304606833ca11434c61a2b5c09933654690e018153c84cb8ee83d97cfe"} Feb 18 19:37:29 crc kubenswrapper[4754]: I0218 19:37:29.656660 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" event={"ID":"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c","Type":"ContainerStarted","Data":"19831b62002c20b0d2403135168700bec32db444030dfab2bc7d0f4138280b79"} Feb 18 19:37:29 crc kubenswrapper[4754]: I0218 19:37:29.977020 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a469-account-create-update-trbvp" Feb 18 19:37:30 crc kubenswrapper[4754]: I0218 19:37:30.074066 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfzzr\" (UniqueName: \"kubernetes.io/projected/fa838e2c-4d5f-4820-ae94-27d460ee1664-kube-api-access-rfzzr\") pod \"fa838e2c-4d5f-4820-ae94-27d460ee1664\" (UID: \"fa838e2c-4d5f-4820-ae94-27d460ee1664\") " Feb 18 19:37:30 crc kubenswrapper[4754]: I0218 19:37:30.074164 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa838e2c-4d5f-4820-ae94-27d460ee1664-operator-scripts\") pod \"fa838e2c-4d5f-4820-ae94-27d460ee1664\" (UID: \"fa838e2c-4d5f-4820-ae94-27d460ee1664\") " Feb 18 19:37:30 crc kubenswrapper[4754]: I0218 19:37:30.076852 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa838e2c-4d5f-4820-ae94-27d460ee1664-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa838e2c-4d5f-4820-ae94-27d460ee1664" (UID: "fa838e2c-4d5f-4820-ae94-27d460ee1664"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:30 crc kubenswrapper[4754]: I0218 19:37:30.080954 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa838e2c-4d5f-4820-ae94-27d460ee1664-kube-api-access-rfzzr" (OuterVolumeSpecName: "kube-api-access-rfzzr") pod "fa838e2c-4d5f-4820-ae94-27d460ee1664" (UID: "fa838e2c-4d5f-4820-ae94-27d460ee1664"). InnerVolumeSpecName "kube-api-access-rfzzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:30 crc kubenswrapper[4754]: I0218 19:37:30.176082 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfzzr\" (UniqueName: \"kubernetes.io/projected/fa838e2c-4d5f-4820-ae94-27d460ee1664-kube-api-access-rfzzr\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:30 crc kubenswrapper[4754]: I0218 19:37:30.176218 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa838e2c-4d5f-4820-ae94-27d460ee1664-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:30 crc kubenswrapper[4754]: I0218 19:37:30.669299 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a469-account-create-update-trbvp" event={"ID":"fa838e2c-4d5f-4820-ae94-27d460ee1664","Type":"ContainerDied","Data":"e32104b4c491066b49b2dd2f3aa619dc2d8d10b9ad2bea55c30cc91af575c164"} Feb 18 19:37:30 crc kubenswrapper[4754]: I0218 19:37:30.669345 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e32104b4c491066b49b2dd2f3aa619dc2d8d10b9ad2bea55c30cc91af575c164" Feb 18 19:37:30 crc kubenswrapper[4754]: I0218 19:37:30.669358 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a469-account-create-update-trbvp" Feb 18 19:37:30 crc kubenswrapper[4754]: I0218 19:37:30.671477 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" event={"ID":"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c","Type":"ContainerStarted","Data":"71ffccd940dea48b446679dc068d8116f6fcd39ccb2efee571b88e7d3f37c452"} Feb 18 19:37:30 crc kubenswrapper[4754]: I0218 19:37:30.698671 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" podStartSLOduration=3.698626785 podStartE2EDuration="3.698626785s" podCreationTimestamp="2026-02-18 19:37:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:30.696444538 +0000 UTC m=+1153.146857354" watchObservedRunningTime="2026-02-18 19:37:30.698626785 +0000 UTC m=+1153.149039581" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.615921 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-lxzvw" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.623392 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-z7kq5" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.648100 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c557-account-create-update-qxlgj" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.665849 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kw8rh" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.669892 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-a073-account-create-update-wmjfz" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.702077 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a073-account-create-update-wmjfz" event={"ID":"1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1","Type":"ContainerDied","Data":"4481a71f64681fe6b21fd9b3c3a9a47113b50b60a78108e404616baf3d459c37"} Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.702130 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4481a71f64681fe6b21fd9b3c3a9a47113b50b60a78108e404616baf3d459c37" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.702237 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a073-account-create-update-wmjfz" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.704541 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-z7kq5" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.704538 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-z7kq5" event={"ID":"d3ce92b2-1e49-4847-a38e-7322e4089b05","Type":"ContainerDied","Data":"c4e50aa34d0d7d9f0ebf058837f8e9ad3c421a9646bad6781ecd5b924b31764e"} Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.704665 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4e50aa34d0d7d9f0ebf058837f8e9ad3c421a9646bad6781ecd5b924b31764e" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.706089 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c557-account-create-update-qxlgj" event={"ID":"e62a77df-9678-4173-bd87-a3451220eb34","Type":"ContainerDied","Data":"25ebfc71295d680fd35f23a425ce60a26e64e322d4bf0cb2cf7de3ee43793d42"} Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.706126 4754 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="25ebfc71295d680fd35f23a425ce60a26e64e322d4bf0cb2cf7de3ee43793d42" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.706104 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c557-account-create-update-qxlgj" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.707687 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-lxzvw" event={"ID":"4de98594-db65-4550-bd48-dddb366bc4de","Type":"ContainerDied","Data":"5c83ff57dc55314f7aa9680971c3f83e552cfc233c6dd47b6066f7a00e978c19"} Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.707710 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c83ff57dc55314f7aa9680971c3f83e552cfc233c6dd47b6066f7a00e978c19" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.707726 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-lxzvw" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.710037 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-kw8rh" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.710049 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kw8rh" event={"ID":"fe7d1a77-a63f-475f-9ffb-8ce51ab1689d","Type":"ContainerDied","Data":"b15f442b569b638d99390c825c589e579538a3e54547053ec812bf472269805a"} Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.710120 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b15f442b569b638d99390c825c589e579538a3e54547053ec812bf472269805a" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.710197 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.715685 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4de98594-db65-4550-bd48-dddb366bc4de-operator-scripts\") pod \"4de98594-db65-4550-bd48-dddb366bc4de\" (UID: \"4de98594-db65-4550-bd48-dddb366bc4de\") " Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.715769 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3ce92b2-1e49-4847-a38e-7322e4089b05-operator-scripts\") pod \"d3ce92b2-1e49-4847-a38e-7322e4089b05\" (UID: \"d3ce92b2-1e49-4847-a38e-7322e4089b05\") " Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.715861 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4tsm\" (UniqueName: \"kubernetes.io/projected/d3ce92b2-1e49-4847-a38e-7322e4089b05-kube-api-access-l4tsm\") pod \"d3ce92b2-1e49-4847-a38e-7322e4089b05\" (UID: \"d3ce92b2-1e49-4847-a38e-7322e4089b05\") " Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.716100 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-lgmtt\" (UniqueName: \"kubernetes.io/projected/4de98594-db65-4550-bd48-dddb366bc4de-kube-api-access-lgmtt\") pod \"4de98594-db65-4550-bd48-dddb366bc4de\" (UID: \"4de98594-db65-4550-bd48-dddb366bc4de\") " Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.716790 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de98594-db65-4550-bd48-dddb366bc4de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4de98594-db65-4550-bd48-dddb366bc4de" (UID: "4de98594-db65-4550-bd48-dddb366bc4de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.716913 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4de98594-db65-4550-bd48-dddb366bc4de-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.721014 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3ce92b2-1e49-4847-a38e-7322e4089b05-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3ce92b2-1e49-4847-a38e-7322e4089b05" (UID: "d3ce92b2-1e49-4847-a38e-7322e4089b05"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.781787 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de98594-db65-4550-bd48-dddb366bc4de-kube-api-access-lgmtt" (OuterVolumeSpecName: "kube-api-access-lgmtt") pod "4de98594-db65-4550-bd48-dddb366bc4de" (UID: "4de98594-db65-4550-bd48-dddb366bc4de"). InnerVolumeSpecName "kube-api-access-lgmtt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.781858 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ce92b2-1e49-4847-a38e-7322e4089b05-kube-api-access-l4tsm" (OuterVolumeSpecName: "kube-api-access-l4tsm") pod "d3ce92b2-1e49-4847-a38e-7322e4089b05" (UID: "d3ce92b2-1e49-4847-a38e-7322e4089b05"). InnerVolumeSpecName "kube-api-access-l4tsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.817937 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1-operator-scripts\") pod \"1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1\" (UID: \"1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1\") " Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.818017 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62a77df-9678-4173-bd87-a3451220eb34-operator-scripts\") pod \"e62a77df-9678-4173-bd87-a3451220eb34\" (UID: \"e62a77df-9678-4173-bd87-a3451220eb34\") " Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.818075 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjppc\" (UniqueName: \"kubernetes.io/projected/e62a77df-9678-4173-bd87-a3451220eb34-kube-api-access-rjppc\") pod \"e62a77df-9678-4173-bd87-a3451220eb34\" (UID: \"e62a77df-9678-4173-bd87-a3451220eb34\") " Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.818096 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q88mg\" (UniqueName: \"kubernetes.io/projected/1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1-kube-api-access-q88mg\") pod \"1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1\" (UID: \"1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1\") " Feb 18 19:37:32 crc 
kubenswrapper[4754]: I0218 19:37:31.818126 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe7d1a77-a63f-475f-9ffb-8ce51ab1689d-operator-scripts\") pod \"fe7d1a77-a63f-475f-9ffb-8ce51ab1689d\" (UID: \"fe7d1a77-a63f-475f-9ffb-8ce51ab1689d\") " Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.818234 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbpk4\" (UniqueName: \"kubernetes.io/projected/fe7d1a77-a63f-475f-9ffb-8ce51ab1689d-kube-api-access-kbpk4\") pod \"fe7d1a77-a63f-475f-9ffb-8ce51ab1689d\" (UID: \"fe7d1a77-a63f-475f-9ffb-8ce51ab1689d\") " Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.818554 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1" (UID: "1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.818927 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e62a77df-9678-4173-bd87-a3451220eb34-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e62a77df-9678-4173-bd87-a3451220eb34" (UID: "e62a77df-9678-4173-bd87-a3451220eb34"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.819461 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgmtt\" (UniqueName: \"kubernetes.io/projected/4de98594-db65-4550-bd48-dddb366bc4de-kube-api-access-lgmtt\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.819499 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.819513 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3ce92b2-1e49-4847-a38e-7322e4089b05-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.819523 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62a77df-9678-4173-bd87-a3451220eb34-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.819533 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4tsm\" (UniqueName: \"kubernetes.io/projected/d3ce92b2-1e49-4847-a38e-7322e4089b05-kube-api-access-l4tsm\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.820843 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe7d1a77-a63f-475f-9ffb-8ce51ab1689d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe7d1a77-a63f-475f-9ffb-8ce51ab1689d" (UID: "fe7d1a77-a63f-475f-9ffb-8ce51ab1689d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.824699 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe7d1a77-a63f-475f-9ffb-8ce51ab1689d-kube-api-access-kbpk4" (OuterVolumeSpecName: "kube-api-access-kbpk4") pod "fe7d1a77-a63f-475f-9ffb-8ce51ab1689d" (UID: "fe7d1a77-a63f-475f-9ffb-8ce51ab1689d"). InnerVolumeSpecName "kube-api-access-kbpk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.825098 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e62a77df-9678-4173-bd87-a3451220eb34-kube-api-access-rjppc" (OuterVolumeSpecName: "kube-api-access-rjppc") pod "e62a77df-9678-4173-bd87-a3451220eb34" (UID: "e62a77df-9678-4173-bd87-a3451220eb34"). InnerVolumeSpecName "kube-api-access-rjppc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.830493 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1-kube-api-access-q88mg" (OuterVolumeSpecName: "kube-api-access-q88mg") pod "1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1" (UID: "1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1"). InnerVolumeSpecName "kube-api-access-q88mg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.921159 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjppc\" (UniqueName: \"kubernetes.io/projected/e62a77df-9678-4173-bd87-a3451220eb34-kube-api-access-rjppc\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.921508 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q88mg\" (UniqueName: \"kubernetes.io/projected/1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1-kube-api-access-q88mg\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.921523 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe7d1a77-a63f-475f-9ffb-8ce51ab1689d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:32 crc kubenswrapper[4754]: I0218 19:37:31.921539 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbpk4\" (UniqueName: \"kubernetes.io/projected/fe7d1a77-a63f-475f-9ffb-8ce51ab1689d-kube-api-access-kbpk4\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:33 crc kubenswrapper[4754]: I0218 19:37:33.744898 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b0d8175e-d7c0-49e6-bce1-770d2dac9b74","Type":"ContainerStarted","Data":"783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14"} Feb 18 19:37:38 crc kubenswrapper[4754]: I0218 19:37:38.396214 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:37:38 crc kubenswrapper[4754]: I0218 19:37:38.467755 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bq2qj"] Feb 18 19:37:38 crc kubenswrapper[4754]: I0218 19:37:38.468072 4754 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" podUID="fba4ef7f-c55c-44e4-8213-6a900266eb2f" containerName="dnsmasq-dns" containerID="cri-o://77c1e757de7b8d03f44bccf28e28400b6dacf6fd2115993599d8dd41011789e1" gracePeriod=10 Feb 18 19:37:38 crc kubenswrapper[4754]: I0218 19:37:38.798890 4754 generic.go:334] "Generic (PLEG): container finished" podID="fba4ef7f-c55c-44e4-8213-6a900266eb2f" containerID="77c1e757de7b8d03f44bccf28e28400b6dacf6fd2115993599d8dd41011789e1" exitCode=0 Feb 18 19:37:38 crc kubenswrapper[4754]: I0218 19:37:38.798960 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" event={"ID":"fba4ef7f-c55c-44e4-8213-6a900266eb2f","Type":"ContainerDied","Data":"77c1e757de7b8d03f44bccf28e28400b6dacf6fd2115993599d8dd41011789e1"} Feb 18 19:37:40 crc kubenswrapper[4754]: I0218 19:37:40.820886 4754 generic.go:334] "Generic (PLEG): container finished" podID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" containerID="783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14" exitCode=0 Feb 18 19:37:40 crc kubenswrapper[4754]: I0218 19:37:40.821104 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b0d8175e-d7c0-49e6-bce1-770d2dac9b74","Type":"ContainerDied","Data":"783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14"} Feb 18 19:37:46 crc kubenswrapper[4754]: I0218 19:37:46.538220 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" podUID="fba4ef7f-c55c-44e4-8213-6a900266eb2f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Feb 18 19:37:47 crc kubenswrapper[4754]: I0218 19:37:47.893955 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" Feb 18 19:37:47 crc kubenswrapper[4754]: I0218 19:37:47.930926 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" event={"ID":"fba4ef7f-c55c-44e4-8213-6a900266eb2f","Type":"ContainerDied","Data":"d5ad1f872c18317c8c0628ff7b85ca58db72a18e449d0fe39ea041513aa3eaca"} Feb 18 19:37:47 crc kubenswrapper[4754]: I0218 19:37:47.931017 4754 scope.go:117] "RemoveContainer" containerID="77c1e757de7b8d03f44bccf28e28400b6dacf6fd2115993599d8dd41011789e1" Feb 18 19:37:47 crc kubenswrapper[4754]: I0218 19:37:47.931026 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" Feb 18 19:37:47 crc kubenswrapper[4754]: I0218 19:37:47.985069 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9lss\" (UniqueName: \"kubernetes.io/projected/fba4ef7f-c55c-44e4-8213-6a900266eb2f-kube-api-access-p9lss\") pod \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\" (UID: \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " Feb 18 19:37:47 crc kubenswrapper[4754]: I0218 19:37:47.985185 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-dns-svc\") pod \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\" (UID: \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " Feb 18 19:37:47 crc kubenswrapper[4754]: I0218 19:37:47.985302 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-ovsdbserver-sb\") pod \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\" (UID: \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " Feb 18 19:37:47 crc kubenswrapper[4754]: I0218 19:37:47.985423 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-ovsdbserver-nb\") pod \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\" (UID: \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " Feb 18 19:37:47 crc kubenswrapper[4754]: I0218 19:37:47.985485 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-config\") pod \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\" (UID: \"fba4ef7f-c55c-44e4-8213-6a900266eb2f\") " Feb 18 19:37:48 crc kubenswrapper[4754]: I0218 19:37:48.003203 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fba4ef7f-c55c-44e4-8213-6a900266eb2f-kube-api-access-p9lss" (OuterVolumeSpecName: "kube-api-access-p9lss") pod "fba4ef7f-c55c-44e4-8213-6a900266eb2f" (UID: "fba4ef7f-c55c-44e4-8213-6a900266eb2f"). InnerVolumeSpecName "kube-api-access-p9lss". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:48 crc kubenswrapper[4754]: I0218 19:37:48.036583 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fba4ef7f-c55c-44e4-8213-6a900266eb2f" (UID: "fba4ef7f-c55c-44e4-8213-6a900266eb2f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:48 crc kubenswrapper[4754]: I0218 19:37:48.039883 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fba4ef7f-c55c-44e4-8213-6a900266eb2f" (UID: "fba4ef7f-c55c-44e4-8213-6a900266eb2f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:48 crc kubenswrapper[4754]: I0218 19:37:48.044880 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-config" (OuterVolumeSpecName: "config") pod "fba4ef7f-c55c-44e4-8213-6a900266eb2f" (UID: "fba4ef7f-c55c-44e4-8213-6a900266eb2f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:48 crc kubenswrapper[4754]: I0218 19:37:48.045684 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fba4ef7f-c55c-44e4-8213-6a900266eb2f" (UID: "fba4ef7f-c55c-44e4-8213-6a900266eb2f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:48 crc kubenswrapper[4754]: I0218 19:37:48.087744 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:48 crc kubenswrapper[4754]: I0218 19:37:48.087793 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9lss\" (UniqueName: \"kubernetes.io/projected/fba4ef7f-c55c-44e4-8213-6a900266eb2f-kube-api-access-p9lss\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:48 crc kubenswrapper[4754]: I0218 19:37:48.087807 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:48 crc kubenswrapper[4754]: I0218 19:37:48.087818 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:48 crc 
kubenswrapper[4754]: I0218 19:37:48.087831 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fba4ef7f-c55c-44e4-8213-6a900266eb2f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:48 crc kubenswrapper[4754]: I0218 19:37:48.272717 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bq2qj"] Feb 18 19:37:48 crc kubenswrapper[4754]: I0218 19:37:48.279677 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-bq2qj"] Feb 18 19:37:48 crc kubenswrapper[4754]: I0218 19:37:48.525726 4754 scope.go:117] "RemoveContainer" containerID="109c4394d1987aa937840dd202ea7773008c95aca9332e896bee657bd553b7b0" Feb 18 19:37:48 crc kubenswrapper[4754]: E0218 19:37:48.574373 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.98:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Feb 18 19:37:48 crc kubenswrapper[4754]: E0218 19:37:48.574433 4754 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.98:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Feb 18 19:37:48 crc kubenswrapper[4754]: E0218 19:37:48.574609 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.102.83.98:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6djg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
watcher-db-sync-wl8ql_openstack(d046b6fd-1000-4f80-af20-d756adbab2ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:37:48 crc kubenswrapper[4754]: E0218 19:37:48.575823 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-wl8ql" podUID="d046b6fd-1000-4f80-af20-d756adbab2ea" Feb 18 19:37:48 crc kubenswrapper[4754]: I0218 19:37:48.947452 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b0d8175e-d7c0-49e6-bce1-770d2dac9b74","Type":"ContainerStarted","Data":"2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a"} Feb 18 19:37:48 crc kubenswrapper[4754]: I0218 19:37:48.954162 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tkp99" event={"ID":"ef3a989f-f82e-4062-8b49-3f4cb7959b73","Type":"ContainerStarted","Data":"77b3ec840318ae3f7f6da44e6af936dbf6c3c77bf92761bfba1f39e794f8a3a2"} Feb 18 19:37:48 crc kubenswrapper[4754]: E0218 19:37:48.962732 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.98:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-wl8ql" podUID="d046b6fd-1000-4f80-af20-d756adbab2ea" Feb 18 19:37:48 crc kubenswrapper[4754]: I0218 19:37:48.979207 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-tkp99" podStartSLOduration=7.629955686 podStartE2EDuration="27.979177069s" podCreationTimestamp="2026-02-18 19:37:21 +0000 UTC" firstStartedPulling="2026-02-18 19:37:28.176813515 +0000 UTC m=+1150.627226311" lastFinishedPulling="2026-02-18 19:37:48.526034888 +0000 UTC 
m=+1170.976447694" observedRunningTime="2026-02-18 19:37:48.973035609 +0000 UTC m=+1171.423448405" watchObservedRunningTime="2026-02-18 19:37:48.979177069 +0000 UTC m=+1171.429589865" Feb 18 19:37:49 crc kubenswrapper[4754]: I0218 19:37:49.982408 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-f6s4l" event={"ID":"0db8affb-2742-46e4-a19d-a907e5c6d28d","Type":"ContainerStarted","Data":"bad3e277c32e6cce4459c94964e348ee6c0a455e24f875b0f6d39ac216f4b610"} Feb 18 19:37:50 crc kubenswrapper[4754]: I0218 19:37:50.022984 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-f6s4l" podStartSLOduration=2.270236494 podStartE2EDuration="41.022946638s" podCreationTimestamp="2026-02-18 19:37:09 +0000 UTC" firstStartedPulling="2026-02-18 19:37:10.086858461 +0000 UTC m=+1132.537271257" lastFinishedPulling="2026-02-18 19:37:48.839568605 +0000 UTC m=+1171.289981401" observedRunningTime="2026-02-18 19:37:50.008855052 +0000 UTC m=+1172.459267858" watchObservedRunningTime="2026-02-18 19:37:50.022946638 +0000 UTC m=+1172.473359424" Feb 18 19:37:50 crc kubenswrapper[4754]: I0218 19:37:50.226494 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fba4ef7f-c55c-44e4-8213-6a900266eb2f" path="/var/lib/kubelet/pods/fba4ef7f-c55c-44e4-8213-6a900266eb2f/volumes" Feb 18 19:37:51 crc kubenswrapper[4754]: I0218 19:37:51.540037 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-bq2qj" podUID="fba4ef7f-c55c-44e4-8213-6a900266eb2f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Feb 18 19:37:53 crc kubenswrapper[4754]: I0218 19:37:53.027465 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b0d8175e-d7c0-49e6-bce1-770d2dac9b74","Type":"ContainerStarted","Data":"f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d"} Feb 18 19:37:53 
crc kubenswrapper[4754]: I0218 19:37:53.027950 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b0d8175e-d7c0-49e6-bce1-770d2dac9b74","Type":"ContainerStarted","Data":"3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66"} Feb 18 19:37:53 crc kubenswrapper[4754]: I0218 19:37:53.072607 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=26.07258511 podStartE2EDuration="26.07258511s" podCreationTimestamp="2026-02-18 19:37:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:53.064585203 +0000 UTC m=+1175.514998009" watchObservedRunningTime="2026-02-18 19:37:53.07258511 +0000 UTC m=+1175.522997906" Feb 18 19:37:53 crc kubenswrapper[4754]: I0218 19:37:53.405807 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:57 crc kubenswrapper[4754]: I0218 19:37:57.070033 4754 generic.go:334] "Generic (PLEG): container finished" podID="ef3a989f-f82e-4062-8b49-3f4cb7959b73" containerID="77b3ec840318ae3f7f6da44e6af936dbf6c3c77bf92761bfba1f39e794f8a3a2" exitCode=0 Feb 18 19:37:57 crc kubenswrapper[4754]: I0218 19:37:57.070177 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tkp99" event={"ID":"ef3a989f-f82e-4062-8b49-3f4cb7959b73","Type":"ContainerDied","Data":"77b3ec840318ae3f7f6da44e6af936dbf6c3c77bf92761bfba1f39e794f8a3a2"} Feb 18 19:37:58 crc kubenswrapper[4754]: I0218 19:37:58.406587 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:58 crc kubenswrapper[4754]: I0218 19:37:58.412327 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:58 crc 
kubenswrapper[4754]: I0218 19:37:58.507359 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tkp99" Feb 18 19:37:58 crc kubenswrapper[4754]: I0218 19:37:58.600817 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3a989f-f82e-4062-8b49-3f4cb7959b73-config-data\") pod \"ef3a989f-f82e-4062-8b49-3f4cb7959b73\" (UID: \"ef3a989f-f82e-4062-8b49-3f4cb7959b73\") " Feb 18 19:37:58 crc kubenswrapper[4754]: I0218 19:37:58.600894 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfdrl\" (UniqueName: \"kubernetes.io/projected/ef3a989f-f82e-4062-8b49-3f4cb7959b73-kube-api-access-tfdrl\") pod \"ef3a989f-f82e-4062-8b49-3f4cb7959b73\" (UID: \"ef3a989f-f82e-4062-8b49-3f4cb7959b73\") " Feb 18 19:37:58 crc kubenswrapper[4754]: I0218 19:37:58.601021 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef3a989f-f82e-4062-8b49-3f4cb7959b73-combined-ca-bundle\") pod \"ef3a989f-f82e-4062-8b49-3f4cb7959b73\" (UID: \"ef3a989f-f82e-4062-8b49-3f4cb7959b73\") " Feb 18 19:37:58 crc kubenswrapper[4754]: I0218 19:37:58.607311 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef3a989f-f82e-4062-8b49-3f4cb7959b73-kube-api-access-tfdrl" (OuterVolumeSpecName: "kube-api-access-tfdrl") pod "ef3a989f-f82e-4062-8b49-3f4cb7959b73" (UID: "ef3a989f-f82e-4062-8b49-3f4cb7959b73"). InnerVolumeSpecName "kube-api-access-tfdrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:58 crc kubenswrapper[4754]: I0218 19:37:58.629565 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef3a989f-f82e-4062-8b49-3f4cb7959b73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef3a989f-f82e-4062-8b49-3f4cb7959b73" (UID: "ef3a989f-f82e-4062-8b49-3f4cb7959b73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:58 crc kubenswrapper[4754]: I0218 19:37:58.662102 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef3a989f-f82e-4062-8b49-3f4cb7959b73-config-data" (OuterVolumeSpecName: "config-data") pod "ef3a989f-f82e-4062-8b49-3f4cb7959b73" (UID: "ef3a989f-f82e-4062-8b49-3f4cb7959b73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:58 crc kubenswrapper[4754]: I0218 19:37:58.703752 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef3a989f-f82e-4062-8b49-3f4cb7959b73-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:58 crc kubenswrapper[4754]: I0218 19:37:58.703805 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfdrl\" (UniqueName: \"kubernetes.io/projected/ef3a989f-f82e-4062-8b49-3f4cb7959b73-kube-api-access-tfdrl\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:58 crc kubenswrapper[4754]: I0218 19:37:58.703852 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef3a989f-f82e-4062-8b49-3f4cb7959b73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.095311 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tkp99" 
event={"ID":"ef3a989f-f82e-4062-8b49-3f4cb7959b73","Type":"ContainerDied","Data":"9b3765b7261f3148e4b30ac4c1f59868950d565a74f22efa4bd65f3904844806"} Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.095909 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b3765b7261f3148e4b30ac4c1f59868950d565a74f22efa4bd65f3904844806" Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.095370 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tkp99" Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.103089 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.386352 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-gt579"] Feb 18 19:37:59 crc kubenswrapper[4754]: E0218 19:37:59.386805 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba4ef7f-c55c-44e4-8213-6a900266eb2f" containerName="init" Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.386823 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba4ef7f-c55c-44e4-8213-6a900266eb2f" containerName="init" Feb 18 19:37:59 crc kubenswrapper[4754]: E0218 19:37:59.386834 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe7d1a77-a63f-475f-9ffb-8ce51ab1689d" containerName="mariadb-database-create" Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.386841 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe7d1a77-a63f-475f-9ffb-8ce51ab1689d" containerName="mariadb-database-create" Feb 18 19:37:59 crc kubenswrapper[4754]: E0218 19:37:59.386860 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62a77df-9678-4173-bd87-a3451220eb34" containerName="mariadb-account-create-update" Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.386869 4754 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e62a77df-9678-4173-bd87-a3451220eb34" containerName="mariadb-account-create-update"
Feb 18 19:37:59 crc kubenswrapper[4754]: E0218 19:37:59.386886 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ce92b2-1e49-4847-a38e-7322e4089b05" containerName="mariadb-database-create"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.386892 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ce92b2-1e49-4847-a38e-7322e4089b05" containerName="mariadb-database-create"
Feb 18 19:37:59 crc kubenswrapper[4754]: E0218 19:37:59.386906 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa838e2c-4d5f-4820-ae94-27d460ee1664" containerName="mariadb-account-create-update"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.386912 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa838e2c-4d5f-4820-ae94-27d460ee1664" containerName="mariadb-account-create-update"
Feb 18 19:37:59 crc kubenswrapper[4754]: E0218 19:37:59.386935 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba4ef7f-c55c-44e4-8213-6a900266eb2f" containerName="dnsmasq-dns"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.386941 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba4ef7f-c55c-44e4-8213-6a900266eb2f" containerName="dnsmasq-dns"
Feb 18 19:37:59 crc kubenswrapper[4754]: E0218 19:37:59.386950 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de98594-db65-4550-bd48-dddb366bc4de" containerName="mariadb-database-create"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.386957 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de98594-db65-4550-bd48-dddb366bc4de" containerName="mariadb-database-create"
Feb 18 19:37:59 crc kubenswrapper[4754]: E0218 19:37:59.386966 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1" containerName="mariadb-account-create-update"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.386972 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1" containerName="mariadb-account-create-update"
Feb 18 19:37:59 crc kubenswrapper[4754]: E0218 19:37:59.386981 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef3a989f-f82e-4062-8b49-3f4cb7959b73" containerName="keystone-db-sync"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.386987 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef3a989f-f82e-4062-8b49-3f4cb7959b73" containerName="keystone-db-sync"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.387203 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ce92b2-1e49-4847-a38e-7322e4089b05" containerName="mariadb-database-create"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.387236 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef3a989f-f82e-4062-8b49-3f4cb7959b73" containerName="keystone-db-sync"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.387253 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe7d1a77-a63f-475f-9ffb-8ce51ab1689d" containerName="mariadb-database-create"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.387266 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba4ef7f-c55c-44e4-8213-6a900266eb2f" containerName="dnsmasq-dns"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.387283 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1" containerName="mariadb-account-create-update"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.387299 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4de98594-db65-4550-bd48-dddb366bc4de" containerName="mariadb-database-create"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.387318 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e62a77df-9678-4173-bd87-a3451220eb34" containerName="mariadb-account-create-update"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.387328 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa838e2c-4d5f-4820-ae94-27d460ee1664" containerName="mariadb-account-create-update"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.387990 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.393589 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.393833 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.393949 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.399926 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gvkx6"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.400228 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.407894 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-nxl5f"]
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.409535 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.416682 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-nxl5f"]
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.440385 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gt579"]
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.517235 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-nxl5f\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.517276 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-nxl5f\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.517319 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqb68\" (UniqueName: \"kubernetes.io/projected/ad078397-2f05-490a-96a6-73cd685d28e4-kube-api-access-qqb68\") pod \"dnsmasq-dns-5b868669f-nxl5f\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.517344 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-config-data\") pod \"keystone-bootstrap-gt579\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.517377 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-fernet-keys\") pod \"keystone-bootstrap-gt579\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.517407 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-dns-svc\") pod \"dnsmasq-dns-5b868669f-nxl5f\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.517467 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtgdq\" (UniqueName: \"kubernetes.io/projected/3eed4606-91cf-47df-8019-d4c6b7da9ab4-kube-api-access-mtgdq\") pod \"keystone-bootstrap-gt579\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.517486 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-config\") pod \"dnsmasq-dns-5b868669f-nxl5f\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.517551 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-credential-keys\") pod \"keystone-bootstrap-gt579\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.517572 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-scripts\") pod \"keystone-bootstrap-gt579\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.517591 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-nxl5f\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.517616 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-combined-ca-bundle\") pod \"keystone-bootstrap-gt579\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.567842 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-888954555-c8j52"]
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.574499 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-888954555-c8j52"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.579743 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.579891 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.582414 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.584520 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-rc8cd"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.626843 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-nxl5f\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.626893 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqb68\" (UniqueName: \"kubernetes.io/projected/ad078397-2f05-490a-96a6-73cd685d28e4-kube-api-access-qqb68\") pod \"dnsmasq-dns-5b868669f-nxl5f\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.626919 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-config-data\") pod \"keystone-bootstrap-gt579\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.626945 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-config-data\") pod \"horizon-888954555-c8j52\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " pod="openstack/horizon-888954555-c8j52"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.626967 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-fernet-keys\") pod \"keystone-bootstrap-gt579\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.626993 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-dns-svc\") pod \"dnsmasq-dns-5b868669f-nxl5f\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.627010 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-horizon-secret-key\") pod \"horizon-888954555-c8j52\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " pod="openstack/horizon-888954555-c8j52"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.627028 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-logs\") pod \"horizon-888954555-c8j52\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " pod="openstack/horizon-888954555-c8j52"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.627064 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtgdq\" (UniqueName: \"kubernetes.io/projected/3eed4606-91cf-47df-8019-d4c6b7da9ab4-kube-api-access-mtgdq\") pod \"keystone-bootstrap-gt579\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.627083 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-config\") pod \"dnsmasq-dns-5b868669f-nxl5f\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.627101 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzkmb\" (UniqueName: \"kubernetes.io/projected/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-kube-api-access-rzkmb\") pod \"horizon-888954555-c8j52\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " pod="openstack/horizon-888954555-c8j52"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.627180 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-scripts\") pod \"horizon-888954555-c8j52\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " pod="openstack/horizon-888954555-c8j52"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.627221 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-credential-keys\") pod \"keystone-bootstrap-gt579\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.627248 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-scripts\") pod \"keystone-bootstrap-gt579\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.627270 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-nxl5f\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.627294 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-combined-ca-bundle\") pod \"keystone-bootstrap-gt579\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.627330 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-nxl5f\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.628244 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-nxl5f\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.628758 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-nxl5f\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.634529 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-config\") pod \"dnsmasq-dns-5b868669f-nxl5f\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.634844 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-dns-svc\") pod \"dnsmasq-dns-5b868669f-nxl5f\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.636171 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-nxl5f\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.638510 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-config-data\") pod \"keystone-bootstrap-gt579\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.640716 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-scripts\") pod \"keystone-bootstrap-gt579\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.642685 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-fernet-keys\") pod \"keystone-bootstrap-gt579\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.654888 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-combined-ca-bundle\") pod \"keystone-bootstrap-gt579\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.651664 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-credential-keys\") pod \"keystone-bootstrap-gt579\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.670807 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqb68\" (UniqueName: \"kubernetes.io/projected/ad078397-2f05-490a-96a6-73cd685d28e4-kube-api-access-qqb68\") pod \"dnsmasq-dns-5b868669f-nxl5f\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.673813 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtgdq\" (UniqueName: \"kubernetes.io/projected/3eed4606-91cf-47df-8019-d4c6b7da9ab4-kube-api-access-mtgdq\") pod \"keystone-bootstrap-gt579\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.692439 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-888954555-c8j52"]
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.723406 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gt579"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.729264 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzkmb\" (UniqueName: \"kubernetes.io/projected/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-kube-api-access-rzkmb\") pod \"horizon-888954555-c8j52\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " pod="openstack/horizon-888954555-c8j52"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.729333 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-scripts\") pod \"horizon-888954555-c8j52\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " pod="openstack/horizon-888954555-c8j52"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.729426 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-config-data\") pod \"horizon-888954555-c8j52\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " pod="openstack/horizon-888954555-c8j52"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.729457 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-horizon-secret-key\") pod \"horizon-888954555-c8j52\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " pod="openstack/horizon-888954555-c8j52"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.729476 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-logs\") pod \"horizon-888954555-c8j52\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " pod="openstack/horizon-888954555-c8j52"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.732132 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-nxl5f"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.732651 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-logs\") pod \"horizon-888954555-c8j52\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " pod="openstack/horizon-888954555-c8j52"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.732927 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-scripts\") pod \"horizon-888954555-c8j52\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " pod="openstack/horizon-888954555-c8j52"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.734562 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-config-data\") pod \"horizon-888954555-c8j52\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " pod="openstack/horizon-888954555-c8j52"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.744301 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-79xk6"]
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.746704 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-horizon-secret-key\") pod \"horizon-888954555-c8j52\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " pod="openstack/horizon-888954555-c8j52"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.762714 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-79xk6"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.776454 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4mfk7"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.776727 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzkmb\" (UniqueName: \"kubernetes.io/projected/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-kube-api-access-rzkmb\") pod \"horizon-888954555-c8j52\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " pod="openstack/horizon-888954555-c8j52"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.781585 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.781835 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.810290 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.829871 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.837568 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.837780 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.857627 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.910055 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-888954555-c8j52"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.915529 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-79xk6"]
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.946846 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-config-data\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.946921 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-db-sync-config-data\") pod \"cinder-db-sync-79xk6\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " pod="openstack/cinder-db-sync-79xk6"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.946949 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc061809-61de-4d52-909b-e2d4957dc4a4-etc-machine-id\") pod \"cinder-db-sync-79xk6\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " pod="openstack/cinder-db-sync-79xk6"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.946965 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/742e0717-1560-424d-b0d3-4e7b46f8ec8c-log-httpd\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.946986 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwmnw\" (UniqueName: \"kubernetes.io/projected/fc061809-61de-4d52-909b-e2d4957dc4a4-kube-api-access-vwmnw\") pod \"cinder-db-sync-79xk6\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " pod="openstack/cinder-db-sync-79xk6"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.947016 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.947034 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-config-data\") pod \"cinder-db-sync-79xk6\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " pod="openstack/cinder-db-sync-79xk6"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.947060 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-scripts\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.947088 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdrtv\" (UniqueName: \"kubernetes.io/projected/742e0717-1560-424d-b0d3-4e7b46f8ec8c-kube-api-access-bdrtv\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.973209 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/742e0717-1560-424d-b0d3-4e7b46f8ec8c-run-httpd\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.973368 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-combined-ca-bundle\") pod \"cinder-db-sync-79xk6\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " pod="openstack/cinder-db-sync-79xk6"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.973432 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-scripts\") pod \"cinder-db-sync-79xk6\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " pod="openstack/cinder-db-sync-79xk6"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.973466 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.973771 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-kkj6c"]
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.975230 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kkj6c"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.989440 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.989761 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fkl4d"
Feb 18 19:37:59 crc kubenswrapper[4754]: I0218 19:37:59.989999 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.006020 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-b2mr5"]
Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.012785 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-b2mr5"
Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.030045 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9gjw2"
Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.083740 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.089345 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-combined-ca-bundle\") pod \"cinder-db-sync-79xk6\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " pod="openstack/cinder-db-sync-79xk6"
Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.089411 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-scripts\") pod \"cinder-db-sync-79xk6\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " pod="openstack/cinder-db-sync-79xk6"
Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.089447 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0"
Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.089548 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-config-data\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0"
Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.089579 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-db-sync-config-data\") pod \"cinder-db-sync-79xk6\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " pod="openstack/cinder-db-sync-79xk6"
Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.089613 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc061809-61de-4d52-909b-e2d4957dc4a4-etc-machine-id\") pod \"cinder-db-sync-79xk6\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " pod="openstack/cinder-db-sync-79xk6"
Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.089636 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/742e0717-1560-424d-b0d3-4e7b46f8ec8c-log-httpd\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0"
Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.089661 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwmnw\" (UniqueName: \"kubernetes.io/projected/fc061809-61de-4d52-909b-e2d4957dc4a4-kube-api-access-vwmnw\") pod \"cinder-db-sync-79xk6\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " pod="openstack/cinder-db-sync-79xk6"
Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.089706 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0"
Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.089732 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-config-data\") pod \"cinder-db-sync-79xk6\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " pod="openstack/cinder-db-sync-79xk6"
Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.089762 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-scripts\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0"
Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.089790 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdrtv\" (UniqueName: \"kubernetes.io/projected/742e0717-1560-424d-b0d3-4e7b46f8ec8c-kube-api-access-bdrtv\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0"
Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.089827 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/742e0717-1560-424d-b0d3-4e7b46f8ec8c-run-httpd\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0"
Feb 18 
19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.094601 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/742e0717-1560-424d-b0d3-4e7b46f8ec8c-run-httpd\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.095602 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc061809-61de-4d52-909b-e2d4957dc4a4-etc-machine-id\") pod \"cinder-db-sync-79xk6\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " pod="openstack/cinder-db-sync-79xk6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.103176 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-5d7g9"] Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.105662 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5d7g9" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.114631 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.115462 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.115707 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-k8m6f" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.147427 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-config-data\") pod \"cinder-db-sync-79xk6\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " pod="openstack/cinder-db-sync-79xk6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.148161 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/742e0717-1560-424d-b0d3-4e7b46f8ec8c-log-httpd\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.153936 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.155471 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-combined-ca-bundle\") pod \"cinder-db-sync-79xk6\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " pod="openstack/cinder-db-sync-79xk6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.157092 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-scripts\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.168231 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.185409 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-config-data\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0" Feb 18 
19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.186110 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-scripts\") pod \"cinder-db-sync-79xk6\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " pod="openstack/cinder-db-sync-79xk6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.186571 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-db-sync-config-data\") pod \"cinder-db-sync-79xk6\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " pod="openstack/cinder-db-sync-79xk6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.195101 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kkj6c"] Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.195172 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwmnw\" (UniqueName: \"kubernetes.io/projected/fc061809-61de-4d52-909b-e2d4957dc4a4-kube-api-access-vwmnw\") pod \"cinder-db-sync-79xk6\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " pod="openstack/cinder-db-sync-79xk6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.221587 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a109a6c-ffaa-479e-95e6-ef033aec4b27-combined-ca-bundle\") pod \"barbican-db-sync-b2mr5\" (UID: \"9a109a6c-ffaa-479e-95e6-ef033aec4b27\") " pod="openstack/barbican-db-sync-b2mr5" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.221773 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a109a6c-ffaa-479e-95e6-ef033aec4b27-db-sync-config-data\") pod \"barbican-db-sync-b2mr5\" (UID: 
\"9a109a6c-ffaa-479e-95e6-ef033aec4b27\") " pod="openstack/barbican-db-sync-b2mr5" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.221855 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmt5x\" (UniqueName: \"kubernetes.io/projected/b1abaf62-0594-4378-bea6-b5dc29d52241-kube-api-access-wmt5x\") pod \"neutron-db-sync-kkj6c\" (UID: \"b1abaf62-0594-4378-bea6-b5dc29d52241\") " pod="openstack/neutron-db-sync-kkj6c" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.221887 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1abaf62-0594-4378-bea6-b5dc29d52241-config\") pod \"neutron-db-sync-kkj6c\" (UID: \"b1abaf62-0594-4378-bea6-b5dc29d52241\") " pod="openstack/neutron-db-sync-kkj6c" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.222045 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt2b5\" (UniqueName: \"kubernetes.io/projected/9a109a6c-ffaa-479e-95e6-ef033aec4b27-kube-api-access-lt2b5\") pod \"barbican-db-sync-b2mr5\" (UID: \"9a109a6c-ffaa-479e-95e6-ef033aec4b27\") " pod="openstack/barbican-db-sync-b2mr5" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.222097 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1abaf62-0594-4378-bea6-b5dc29d52241-combined-ca-bundle\") pod \"neutron-db-sync-kkj6c\" (UID: \"b1abaf62-0594-4378-bea6-b5dc29d52241\") " pod="openstack/neutron-db-sync-kkj6c" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.229716 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-79xk6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.244459 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdrtv\" (UniqueName: \"kubernetes.io/projected/742e0717-1560-424d-b0d3-4e7b46f8ec8c-kube-api-access-bdrtv\") pod \"ceilometer-0\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " pod="openstack/ceilometer-0" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.299116 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.304684 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5d7g9"] Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.306072 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-b2mr5"] Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.323812 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a109a6c-ffaa-479e-95e6-ef033aec4b27-combined-ca-bundle\") pod \"barbican-db-sync-b2mr5\" (UID: \"9a109a6c-ffaa-479e-95e6-ef033aec4b27\") " pod="openstack/barbican-db-sync-b2mr5" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.323952 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kb7v\" (UniqueName: \"kubernetes.io/projected/5747d187-87f8-4baa-b0aa-65916db69601-kube-api-access-8kb7v\") pod \"placement-db-sync-5d7g9\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " pod="openstack/placement-db-sync-5d7g9" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.323979 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a109a6c-ffaa-479e-95e6-ef033aec4b27-db-sync-config-data\") pod \"barbican-db-sync-b2mr5\" 
(UID: \"9a109a6c-ffaa-479e-95e6-ef033aec4b27\") " pod="openstack/barbican-db-sync-b2mr5" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.324014 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmt5x\" (UniqueName: \"kubernetes.io/projected/b1abaf62-0594-4378-bea6-b5dc29d52241-kube-api-access-wmt5x\") pod \"neutron-db-sync-kkj6c\" (UID: \"b1abaf62-0594-4378-bea6-b5dc29d52241\") " pod="openstack/neutron-db-sync-kkj6c" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.324031 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1abaf62-0594-4378-bea6-b5dc29d52241-config\") pod \"neutron-db-sync-kkj6c\" (UID: \"b1abaf62-0594-4378-bea6-b5dc29d52241\") " pod="openstack/neutron-db-sync-kkj6c" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.324083 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5747d187-87f8-4baa-b0aa-65916db69601-combined-ca-bundle\") pod \"placement-db-sync-5d7g9\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " pod="openstack/placement-db-sync-5d7g9" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.324105 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt2b5\" (UniqueName: \"kubernetes.io/projected/9a109a6c-ffaa-479e-95e6-ef033aec4b27-kube-api-access-lt2b5\") pod \"barbican-db-sync-b2mr5\" (UID: \"9a109a6c-ffaa-479e-95e6-ef033aec4b27\") " pod="openstack/barbican-db-sync-b2mr5" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.324124 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5747d187-87f8-4baa-b0aa-65916db69601-logs\") pod \"placement-db-sync-5d7g9\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " 
pod="openstack/placement-db-sync-5d7g9" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.331575 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1abaf62-0594-4378-bea6-b5dc29d52241-combined-ca-bundle\") pod \"neutron-db-sync-kkj6c\" (UID: \"b1abaf62-0594-4378-bea6-b5dc29d52241\") " pod="openstack/neutron-db-sync-kkj6c" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.331713 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5747d187-87f8-4baa-b0aa-65916db69601-config-data\") pod \"placement-db-sync-5d7g9\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " pod="openstack/placement-db-sync-5d7g9" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.331748 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5747d187-87f8-4baa-b0aa-65916db69601-scripts\") pod \"placement-db-sync-5d7g9\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " pod="openstack/placement-db-sync-5d7g9" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.335458 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a109a6c-ffaa-479e-95e6-ef033aec4b27-combined-ca-bundle\") pod \"barbican-db-sync-b2mr5\" (UID: \"9a109a6c-ffaa-479e-95e6-ef033aec4b27\") " pod="openstack/barbican-db-sync-b2mr5" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.350183 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a109a6c-ffaa-479e-95e6-ef033aec4b27-db-sync-config-data\") pod \"barbican-db-sync-b2mr5\" (UID: \"9a109a6c-ffaa-479e-95e6-ef033aec4b27\") " pod="openstack/barbican-db-sync-b2mr5" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 
19:38:00.355662 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1abaf62-0594-4378-bea6-b5dc29d52241-combined-ca-bundle\") pod \"neutron-db-sync-kkj6c\" (UID: \"b1abaf62-0594-4378-bea6-b5dc29d52241\") " pod="openstack/neutron-db-sync-kkj6c" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.356608 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1abaf62-0594-4378-bea6-b5dc29d52241-config\") pod \"neutron-db-sync-kkj6c\" (UID: \"b1abaf62-0594-4378-bea6-b5dc29d52241\") " pod="openstack/neutron-db-sync-kkj6c" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.358690 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt2b5\" (UniqueName: \"kubernetes.io/projected/9a109a6c-ffaa-479e-95e6-ef033aec4b27-kube-api-access-lt2b5\") pod \"barbican-db-sync-b2mr5\" (UID: \"9a109a6c-ffaa-479e-95e6-ef033aec4b27\") " pod="openstack/barbican-db-sync-b2mr5" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.374333 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-nxl5f"] Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.389002 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmt5x\" (UniqueName: \"kubernetes.io/projected/b1abaf62-0594-4378-bea6-b5dc29d52241-kube-api-access-wmt5x\") pod \"neutron-db-sync-kkj6c\" (UID: \"b1abaf62-0594-4378-bea6-b5dc29d52241\") " pod="openstack/neutron-db-sync-kkj6c" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.434057 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5747d187-87f8-4baa-b0aa-65916db69601-config-data\") pod \"placement-db-sync-5d7g9\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " pod="openstack/placement-db-sync-5d7g9" Feb 18 19:38:00 crc 
kubenswrapper[4754]: I0218 19:38:00.435840 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5747d187-87f8-4baa-b0aa-65916db69601-scripts\") pod \"placement-db-sync-5d7g9\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " pod="openstack/placement-db-sync-5d7g9" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.436086 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kb7v\" (UniqueName: \"kubernetes.io/projected/5747d187-87f8-4baa-b0aa-65916db69601-kube-api-access-8kb7v\") pod \"placement-db-sync-5d7g9\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " pod="openstack/placement-db-sync-5d7g9" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.436451 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5747d187-87f8-4baa-b0aa-65916db69601-combined-ca-bundle\") pod \"placement-db-sync-5d7g9\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " pod="openstack/placement-db-sync-5d7g9" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.436690 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5747d187-87f8-4baa-b0aa-65916db69601-logs\") pod \"placement-db-sync-5d7g9\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " pod="openstack/placement-db-sync-5d7g9" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.437528 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5747d187-87f8-4baa-b0aa-65916db69601-logs\") pod \"placement-db-sync-5d7g9\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " pod="openstack/placement-db-sync-5d7g9" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.442180 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5747d187-87f8-4baa-b0aa-65916db69601-scripts\") pod \"placement-db-sync-5d7g9\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " pod="openstack/placement-db-sync-5d7g9" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.442790 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5747d187-87f8-4baa-b0aa-65916db69601-combined-ca-bundle\") pod \"placement-db-sync-5d7g9\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " pod="openstack/placement-db-sync-5d7g9" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.445617 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69df465b89-p9cqb"] Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.449217 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.452666 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-b2mr5" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.457281 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69df465b89-p9cqb"] Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.462602 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5747d187-87f8-4baa-b0aa-65916db69601-config-data\") pod \"placement-db-sync-5d7g9\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " pod="openstack/placement-db-sync-5d7g9" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.463371 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kb7v\" (UniqueName: \"kubernetes.io/projected/5747d187-87f8-4baa-b0aa-65916db69601-kube-api-access-8kb7v\") pod \"placement-db-sync-5d7g9\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " pod="openstack/placement-db-sync-5d7g9" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.475491 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5d7g9" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.488127 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-gr9m6"] Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.490606 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.514601 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-gr9m6"] Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.543687 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-config\") pod \"dnsmasq-dns-cf78879c9-gr9m6\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.544002 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-gr9m6\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.544028 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0c9800a6-c17a-482a-8f95-134b2df4afba-horizon-secret-key\") pod \"horizon-69df465b89-p9cqb\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.544073 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-gr9m6\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.544213 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c9800a6-c17a-482a-8f95-134b2df4afba-config-data\") pod \"horizon-69df465b89-p9cqb\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.544284 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-gr9m6\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.544440 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb9zv\" (UniqueName: \"kubernetes.io/projected/15826bdf-7267-4160-b0b8-f4eb3b76eae0-kube-api-access-mb9zv\") pod \"dnsmasq-dns-cf78879c9-gr9m6\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.544504 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c9800a6-c17a-482a-8f95-134b2df4afba-logs\") pod \"horizon-69df465b89-p9cqb\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.544534 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfl2f\" (UniqueName: \"kubernetes.io/projected/0c9800a6-c17a-482a-8f95-134b2df4afba-kube-api-access-mfl2f\") pod \"horizon-69df465b89-p9cqb\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.544575 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c9800a6-c17a-482a-8f95-134b2df4afba-scripts\") pod \"horizon-69df465b89-p9cqb\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.544616 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-dns-svc\") pod \"dnsmasq-dns-cf78879c9-gr9m6\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.646245 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-config\") pod \"dnsmasq-dns-cf78879c9-gr9m6\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.646295 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-gr9m6\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.646322 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0c9800a6-c17a-482a-8f95-134b2df4afba-horizon-secret-key\") pod \"horizon-69df465b89-p9cqb\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.646368 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-gr9m6\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.646395 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c9800a6-c17a-482a-8f95-134b2df4afba-config-data\") pod \"horizon-69df465b89-p9cqb\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.646414 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-gr9m6\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.646458 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb9zv\" (UniqueName: \"kubernetes.io/projected/15826bdf-7267-4160-b0b8-f4eb3b76eae0-kube-api-access-mb9zv\") pod \"dnsmasq-dns-cf78879c9-gr9m6\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.646480 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c9800a6-c17a-482a-8f95-134b2df4afba-logs\") pod \"horizon-69df465b89-p9cqb\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.646503 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfl2f\" (UniqueName: 
\"kubernetes.io/projected/0c9800a6-c17a-482a-8f95-134b2df4afba-kube-api-access-mfl2f\") pod \"horizon-69df465b89-p9cqb\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.646521 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c9800a6-c17a-482a-8f95-134b2df4afba-scripts\") pod \"horizon-69df465b89-p9cqb\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.646738 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-dns-svc\") pod \"dnsmasq-dns-cf78879c9-gr9m6\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.647953 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-dns-svc\") pod \"dnsmasq-dns-cf78879c9-gr9m6\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.648938 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-config\") pod \"dnsmasq-dns-cf78879c9-gr9m6\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.649168 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-gr9m6\" (UID: 
\"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.649296 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-gr9m6\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.649318 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c9800a6-c17a-482a-8f95-134b2df4afba-logs\") pod \"horizon-69df465b89-p9cqb\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.650312 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-gr9m6\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.653424 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c9800a6-c17a-482a-8f95-134b2df4afba-scripts\") pod \"horizon-69df465b89-p9cqb\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.653973 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c9800a6-c17a-482a-8f95-134b2df4afba-config-data\") pod \"horizon-69df465b89-p9cqb\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.670730 
4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0c9800a6-c17a-482a-8f95-134b2df4afba-horizon-secret-key\") pod \"horizon-69df465b89-p9cqb\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.680008 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-nxl5f"] Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.683324 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kkj6c" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.706493 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfl2f\" (UniqueName: \"kubernetes.io/projected/0c9800a6-c17a-482a-8f95-134b2df4afba-kube-api-access-mfl2f\") pod \"horizon-69df465b89-p9cqb\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.707252 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb9zv\" (UniqueName: \"kubernetes.io/projected/15826bdf-7267-4160-b0b8-f4eb3b76eae0-kube-api-access-mb9zv\") pod \"dnsmasq-dns-cf78879c9-gr9m6\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:00 crc kubenswrapper[4754]: W0218 19:38:00.760573 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad078397_2f05_490a_96a6_73cd685d28e4.slice/crio-efa63182124bc4d23b3c175cf876a93e2dd8719bf743f295ed13640204687864 WatchSource:0}: Error finding container efa63182124bc4d23b3c175cf876a93e2dd8719bf743f295ed13640204687864: Status 404 returned error can't find the container with id efa63182124bc4d23b3c175cf876a93e2dd8719bf743f295ed13640204687864 Feb 18 19:38:00 crc 
kubenswrapper[4754]: I0218 19:38:00.791564 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:38:00 crc kubenswrapper[4754]: I0218 19:38:00.835864 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:01 crc kubenswrapper[4754]: I0218 19:38:01.016169 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gt579"] Feb 18 19:38:01 crc kubenswrapper[4754]: I0218 19:38:01.233841 4754 generic.go:334] "Generic (PLEG): container finished" podID="0db8affb-2742-46e4-a19d-a907e5c6d28d" containerID="bad3e277c32e6cce4459c94964e348ee6c0a455e24f875b0f6d39ac216f4b610" exitCode=0 Feb 18 19:38:01 crc kubenswrapper[4754]: I0218 19:38:01.234378 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-f6s4l" event={"ID":"0db8affb-2742-46e4-a19d-a907e5c6d28d","Type":"ContainerDied","Data":"bad3e277c32e6cce4459c94964e348ee6c0a455e24f875b0f6d39ac216f4b610"} Feb 18 19:38:01 crc kubenswrapper[4754]: I0218 19:38:01.237127 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-nxl5f" event={"ID":"ad078397-2f05-490a-96a6-73cd685d28e4","Type":"ContainerStarted","Data":"efa63182124bc4d23b3c175cf876a93e2dd8719bf743f295ed13640204687864"} Feb 18 19:38:01 crc kubenswrapper[4754]: I0218 19:38:01.238285 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gt579" event={"ID":"3eed4606-91cf-47df-8019-d4c6b7da9ab4","Type":"ContainerStarted","Data":"59ec01112d803cd94787da8f84c9c9c0a3915b70e94ff78a01c4a438886efc2c"} Feb 18 19:38:01 crc kubenswrapper[4754]: I0218 19:38:01.274470 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-79xk6"] Feb 18 19:38:01 crc kubenswrapper[4754]: I0218 19:38:01.299458 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/horizon-888954555-c8j52"] Feb 18 19:38:01 crc kubenswrapper[4754]: I0218 19:38:01.319731 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-b2mr5"] Feb 18 19:38:01 crc kubenswrapper[4754]: W0218 19:38:01.332918 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a109a6c_ffaa_479e_95e6_ef033aec4b27.slice/crio-e00e789de12be741424b73fb6edd0ea814dbd606d9bd18f3ba7a2c975c21bc58 WatchSource:0}: Error finding container e00e789de12be741424b73fb6edd0ea814dbd606d9bd18f3ba7a2c975c21bc58: Status 404 returned error can't find the container with id e00e789de12be741424b73fb6edd0ea814dbd606d9bd18f3ba7a2c975c21bc58 Feb 18 19:38:01 crc kubenswrapper[4754]: I0218 19:38:01.381390 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:38:01 crc kubenswrapper[4754]: W0218 19:38:01.412094 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod742e0717_1560_424d_b0d3_4e7b46f8ec8c.slice/crio-3cc87e443e04924f45dd38ce82d4dbd762fe70942a83e280aaddf9e17de56785 WatchSource:0}: Error finding container 3cc87e443e04924f45dd38ce82d4dbd762fe70942a83e280aaddf9e17de56785: Status 404 returned error can't find the container with id 3cc87e443e04924f45dd38ce82d4dbd762fe70942a83e280aaddf9e17de56785 Feb 18 19:38:01 crc kubenswrapper[4754]: I0218 19:38:01.451474 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5d7g9"] Feb 18 19:38:01 crc kubenswrapper[4754]: W0218 19:38:01.458283 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5747d187_87f8_4baa_b0aa_65916db69601.slice/crio-b338c1557451514520d2e9dd6b06236abf1d32e474d2642c371069b6ec11cb00 WatchSource:0}: Error finding container 
b338c1557451514520d2e9dd6b06236abf1d32e474d2642c371069b6ec11cb00: Status 404 returned error can't find the container with id b338c1557451514520d2e9dd6b06236abf1d32e474d2642c371069b6ec11cb00 Feb 18 19:38:01 crc kubenswrapper[4754]: I0218 19:38:01.592122 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kkj6c"] Feb 18 19:38:01 crc kubenswrapper[4754]: I0218 19:38:01.628656 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-gr9m6"] Feb 18 19:38:01 crc kubenswrapper[4754]: I0218 19:38:01.654554 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69df465b89-p9cqb"] Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.323167 4754 generic.go:334] "Generic (PLEG): container finished" podID="ad078397-2f05-490a-96a6-73cd685d28e4" containerID="018cc914755c201af38bde4b03e7ed1fe3141214b65a14a82e81b667c34a7994" exitCode=0 Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.323660 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-nxl5f" event={"ID":"ad078397-2f05-490a-96a6-73cd685d28e4","Type":"ContainerDied","Data":"018cc914755c201af38bde4b03e7ed1fe3141214b65a14a82e81b667c34a7994"} Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.378448 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kkj6c" event={"ID":"b1abaf62-0594-4378-bea6-b5dc29d52241","Type":"ContainerStarted","Data":"7b975af1f6f66177a58d1deec670fbf1439aadfbcb2e00637d32275d7dd3dd0b"} Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.378506 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kkj6c" event={"ID":"b1abaf62-0594-4378-bea6-b5dc29d52241","Type":"ContainerStarted","Data":"4fccfae1439010a409d42616b972226c9e83aa57e2a945d3fe8a229792088aa9"} Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.385851 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-sync-b2mr5" event={"ID":"9a109a6c-ffaa-479e-95e6-ef033aec4b27","Type":"ContainerStarted","Data":"e00e789de12be741424b73fb6edd0ea814dbd606d9bd18f3ba7a2c975c21bc58"} Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.394049 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gt579" event={"ID":"3eed4606-91cf-47df-8019-d4c6b7da9ab4","Type":"ContainerStarted","Data":"6373cbb734fe7f180fca59d2f0a4db6d337503aad8a4b3efddb13cd3da0f8e05"} Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.397302 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69df465b89-p9cqb" event={"ID":"0c9800a6-c17a-482a-8f95-134b2df4afba","Type":"ContainerStarted","Data":"3f2f6843c3ceadd8976f8f4657f4e6923a2f54e4df2405e46aa7cc7d7e9baf9c"} Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.406385 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-888954555-c8j52" event={"ID":"78669beb-cdbe-41e0-8897-3bcf16dc9bdb","Type":"ContainerStarted","Data":"be3930fb5efc913a2038d7acbb68db2a6c68fa9e040d751f5da66d509d8807cd"} Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.416578 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-kkj6c" podStartSLOduration=3.416550876 podStartE2EDuration="3.416550876s" podCreationTimestamp="2026-02-18 19:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:02.406934428 +0000 UTC m=+1184.857347224" watchObservedRunningTime="2026-02-18 19:38:02.416550876 +0000 UTC m=+1184.866963672" Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.420529 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69df465b89-p9cqb"] Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.424298 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"742e0717-1560-424d-b0d3-4e7b46f8ec8c","Type":"ContainerStarted","Data":"3cc87e443e04924f45dd38ce82d4dbd762fe70942a83e280aaddf9e17de56785"} Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.473457 4754 generic.go:334] "Generic (PLEG): container finished" podID="15826bdf-7267-4160-b0b8-f4eb3b76eae0" containerID="e43b31700a72c13d87c36254c4188d3a341c970e897aa44690219eb0d312ce86" exitCode=0 Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.473567 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" event={"ID":"15826bdf-7267-4160-b0b8-f4eb3b76eae0","Type":"ContainerDied","Data":"e43b31700a72c13d87c36254c4188d3a341c970e897aa44690219eb0d312ce86"} Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.473598 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" event={"ID":"15826bdf-7267-4160-b0b8-f4eb3b76eae0","Type":"ContainerStarted","Data":"c7f2377c2c4811d43f135490f988a8e8ab2d4966e83205b4cdf52de9c41e2265"} Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.485043 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-946b65d6f-f4rwt"] Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.488195 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-946b65d6f-f4rwt" Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.489090 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-79xk6" event={"ID":"fc061809-61de-4d52-909b-e2d4957dc4a4","Type":"ContainerStarted","Data":"b0ce13457280e82b4c7cbb09e22a8cf9d4df5b01d368cd39de5bee6421babd96"} Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.491643 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5d7g9" event={"ID":"5747d187-87f8-4baa-b0aa-65916db69601","Type":"ContainerStarted","Data":"b338c1557451514520d2e9dd6b06236abf1d32e474d2642c371069b6ec11cb00"} Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.500786 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-gt579" podStartSLOduration=3.500752168 podStartE2EDuration="3.500752168s" podCreationTimestamp="2026-02-18 19:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:02.447559904 +0000 UTC m=+1184.897972700" watchObservedRunningTime="2026-02-18 19:38:02.500752168 +0000 UTC m=+1184.951164964" Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.581858 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.587029 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-946b65d6f-f4rwt"] Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.619630 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-horizon-secret-key\") pod \"horizon-946b65d6f-f4rwt\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " pod="openstack/horizon-946b65d6f-f4rwt" Feb 18 19:38:02 crc kubenswrapper[4754]: 
I0218 19:38:02.619685 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-config-data\") pod \"horizon-946b65d6f-f4rwt\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " pod="openstack/horizon-946b65d6f-f4rwt" Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.619713 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlxg9\" (UniqueName: \"kubernetes.io/projected/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-kube-api-access-rlxg9\") pod \"horizon-946b65d6f-f4rwt\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " pod="openstack/horizon-946b65d6f-f4rwt" Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.619917 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-logs\") pod \"horizon-946b65d6f-f4rwt\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " pod="openstack/horizon-946b65d6f-f4rwt" Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.619959 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-scripts\") pod \"horizon-946b65d6f-f4rwt\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " pod="openstack/horizon-946b65d6f-f4rwt" Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.723124 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-scripts\") pod \"horizon-946b65d6f-f4rwt\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " pod="openstack/horizon-946b65d6f-f4rwt" Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.724688 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-scripts\") pod \"horizon-946b65d6f-f4rwt\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " pod="openstack/horizon-946b65d6f-f4rwt" Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.723325 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-horizon-secret-key\") pod \"horizon-946b65d6f-f4rwt\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " pod="openstack/horizon-946b65d6f-f4rwt" Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.724793 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-config-data\") pod \"horizon-946b65d6f-f4rwt\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " pod="openstack/horizon-946b65d6f-f4rwt" Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.724820 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlxg9\" (UniqueName: \"kubernetes.io/projected/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-kube-api-access-rlxg9\") pod \"horizon-946b65d6f-f4rwt\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " pod="openstack/horizon-946b65d6f-f4rwt" Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.724929 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-logs\") pod \"horizon-946b65d6f-f4rwt\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " pod="openstack/horizon-946b65d6f-f4rwt" Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.726704 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-config-data\") pod \"horizon-946b65d6f-f4rwt\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " pod="openstack/horizon-946b65d6f-f4rwt" Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.727573 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-logs\") pod \"horizon-946b65d6f-f4rwt\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " pod="openstack/horizon-946b65d6f-f4rwt" Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.734384 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-horizon-secret-key\") pod \"horizon-946b65d6f-f4rwt\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " pod="openstack/horizon-946b65d6f-f4rwt" Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.760378 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlxg9\" (UniqueName: \"kubernetes.io/projected/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-kube-api-access-rlxg9\") pod \"horizon-946b65d6f-f4rwt\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " pod="openstack/horizon-946b65d6f-f4rwt" Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.900835 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-nxl5f" Feb 18 19:38:02 crc kubenswrapper[4754]: I0218 19:38:02.921792 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-946b65d6f-f4rwt" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.047106 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-ovsdbserver-nb\") pod \"ad078397-2f05-490a-96a6-73cd685d28e4\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.047770 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqb68\" (UniqueName: \"kubernetes.io/projected/ad078397-2f05-490a-96a6-73cd685d28e4-kube-api-access-qqb68\") pod \"ad078397-2f05-490a-96a6-73cd685d28e4\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.047905 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-dns-svc\") pod \"ad078397-2f05-490a-96a6-73cd685d28e4\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.047946 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-dns-swift-storage-0\") pod \"ad078397-2f05-490a-96a6-73cd685d28e4\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.048135 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-config\") pod \"ad078397-2f05-490a-96a6-73cd685d28e4\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.048247 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-ovsdbserver-sb\") pod \"ad078397-2f05-490a-96a6-73cd685d28e4\" (UID: \"ad078397-2f05-490a-96a6-73cd685d28e4\") " Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.052245 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad078397-2f05-490a-96a6-73cd685d28e4-kube-api-access-qqb68" (OuterVolumeSpecName: "kube-api-access-qqb68") pod "ad078397-2f05-490a-96a6-73cd685d28e4" (UID: "ad078397-2f05-490a-96a6-73cd685d28e4"). InnerVolumeSpecName "kube-api-access-qqb68". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.073068 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ad078397-2f05-490a-96a6-73cd685d28e4" (UID: "ad078397-2f05-490a-96a6-73cd685d28e4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.079220 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ad078397-2f05-490a-96a6-73cd685d28e4" (UID: "ad078397-2f05-490a-96a6-73cd685d28e4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.083206 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ad078397-2f05-490a-96a6-73cd685d28e4" (UID: "ad078397-2f05-490a-96a6-73cd685d28e4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.093660 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-config" (OuterVolumeSpecName: "config") pod "ad078397-2f05-490a-96a6-73cd685d28e4" (UID: "ad078397-2f05-490a-96a6-73cd685d28e4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.097003 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ad078397-2f05-490a-96a6-73cd685d28e4" (UID: "ad078397-2f05-490a-96a6-73cd685d28e4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.150816 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.150853 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqb68\" (UniqueName: \"kubernetes.io/projected/ad078397-2f05-490a-96a6-73cd685d28e4-kube-api-access-qqb68\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.150867 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.150877 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:03 crc 
kubenswrapper[4754]: I0218 19:38:03.150885 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.150894 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad078397-2f05-490a-96a6-73cd685d28e4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.509793 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-946b65d6f-f4rwt"] Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.525712 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-nxl5f" event={"ID":"ad078397-2f05-490a-96a6-73cd685d28e4","Type":"ContainerDied","Data":"efa63182124bc4d23b3c175cf876a93e2dd8719bf743f295ed13640204687864"} Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.525732 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-nxl5f" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.525780 4754 scope.go:117] "RemoveContainer" containerID="018cc914755c201af38bde4b03e7ed1fe3141214b65a14a82e81b667c34a7994" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.533345 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" event={"ID":"15826bdf-7267-4160-b0b8-f4eb3b76eae0","Type":"ContainerStarted","Data":"b644ec0f34c805a147d0aaac4270a43d0a8c6e6b4e5e8c04ea5f090ec8694820"} Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.535027 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:03 crc kubenswrapper[4754]: W0218 19:38:03.545435 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf46e1cd8_a675_4ee6_a25c_025c9a00c7f0.slice/crio-275f3d79255189e9eb26d8aca3fbbe86bc9ef5f69d1eec66146bb2c797e16e17 WatchSource:0}: Error finding container 275f3d79255189e9eb26d8aca3fbbe86bc9ef5f69d1eec66146bb2c797e16e17: Status 404 returned error can't find the container with id 275f3d79255189e9eb26d8aca3fbbe86bc9ef5f69d1eec66146bb2c797e16e17 Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.582595 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" podStartSLOduration=3.5825734909999998 podStartE2EDuration="3.582573491s" podCreationTimestamp="2026-02-18 19:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:03.564217485 +0000 UTC m=+1186.014630291" watchObservedRunningTime="2026-02-18 19:38:03.582573491 +0000 UTC m=+1186.032986287" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.586886 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-f6s4l" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.661630 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-nxl5f"] Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.666322 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0db8affb-2742-46e4-a19d-a907e5c6d28d-config-data\") pod \"0db8affb-2742-46e4-a19d-a907e5c6d28d\" (UID: \"0db8affb-2742-46e4-a19d-a907e5c6d28d\") " Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.666431 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k86b\" (UniqueName: \"kubernetes.io/projected/0db8affb-2742-46e4-a19d-a907e5c6d28d-kube-api-access-9k86b\") pod \"0db8affb-2742-46e4-a19d-a907e5c6d28d\" (UID: \"0db8affb-2742-46e4-a19d-a907e5c6d28d\") " Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.666532 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0db8affb-2742-46e4-a19d-a907e5c6d28d-db-sync-config-data\") pod \"0db8affb-2742-46e4-a19d-a907e5c6d28d\" (UID: \"0db8affb-2742-46e4-a19d-a907e5c6d28d\") " Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.666630 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db8affb-2742-46e4-a19d-a907e5c6d28d-combined-ca-bundle\") pod \"0db8affb-2742-46e4-a19d-a907e5c6d28d\" (UID: \"0db8affb-2742-46e4-a19d-a907e5c6d28d\") " Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.675803 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0db8affb-2742-46e4-a19d-a907e5c6d28d-kube-api-access-9k86b" (OuterVolumeSpecName: "kube-api-access-9k86b") pod "0db8affb-2742-46e4-a19d-a907e5c6d28d" (UID: 
"0db8affb-2742-46e4-a19d-a907e5c6d28d"). InnerVolumeSpecName "kube-api-access-9k86b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.678920 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db8affb-2742-46e4-a19d-a907e5c6d28d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0db8affb-2742-46e4-a19d-a907e5c6d28d" (UID: "0db8affb-2742-46e4-a19d-a907e5c6d28d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.689520 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-nxl5f"] Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.738813 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db8affb-2742-46e4-a19d-a907e5c6d28d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0db8affb-2742-46e4-a19d-a907e5c6d28d" (UID: "0db8affb-2742-46e4-a19d-a907e5c6d28d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.772513 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db8affb-2742-46e4-a19d-a907e5c6d28d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.772554 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k86b\" (UniqueName: \"kubernetes.io/projected/0db8affb-2742-46e4-a19d-a907e5c6d28d-kube-api-access-9k86b\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.772568 4754 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0db8affb-2742-46e4-a19d-a907e5c6d28d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.801937 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db8affb-2742-46e4-a19d-a907e5c6d28d-config-data" (OuterVolumeSpecName: "config-data") pod "0db8affb-2742-46e4-a19d-a907e5c6d28d" (UID: "0db8affb-2742-46e4-a19d-a907e5c6d28d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:03 crc kubenswrapper[4754]: I0218 19:38:03.875843 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0db8affb-2742-46e4-a19d-a907e5c6d28d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:04 crc kubenswrapper[4754]: I0218 19:38:04.230585 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad078397-2f05-490a-96a6-73cd685d28e4" path="/var/lib/kubelet/pods/ad078397-2f05-490a-96a6-73cd685d28e4/volumes" Feb 18 19:38:04 crc kubenswrapper[4754]: I0218 19:38:04.565554 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-f6s4l" event={"ID":"0db8affb-2742-46e4-a19d-a907e5c6d28d","Type":"ContainerDied","Data":"08dba6bf1d8cbc47a6ca5ef10acf781ebc317295d12631cba369c1e1b242f2f3"} Feb 18 19:38:04 crc kubenswrapper[4754]: I0218 19:38:04.565614 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08dba6bf1d8cbc47a6ca5ef10acf781ebc317295d12631cba369c1e1b242f2f3" Feb 18 19:38:04 crc kubenswrapper[4754]: I0218 19:38:04.565609 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-f6s4l" Feb 18 19:38:04 crc kubenswrapper[4754]: I0218 19:38:04.592273 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-946b65d6f-f4rwt" event={"ID":"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0","Type":"ContainerStarted","Data":"275f3d79255189e9eb26d8aca3fbbe86bc9ef5f69d1eec66146bb2c797e16e17"} Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.083200 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-gr9m6"] Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.156167 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-p8ttj"] Feb 18 19:38:05 crc kubenswrapper[4754]: E0218 19:38:05.156601 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad078397-2f05-490a-96a6-73cd685d28e4" containerName="init" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.156616 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad078397-2f05-490a-96a6-73cd685d28e4" containerName="init" Feb 18 19:38:05 crc kubenswrapper[4754]: E0218 19:38:05.156639 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0db8affb-2742-46e4-a19d-a907e5c6d28d" containerName="glance-db-sync" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.156645 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="0db8affb-2742-46e4-a19d-a907e5c6d28d" containerName="glance-db-sync" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.156830 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad078397-2f05-490a-96a6-73cd685d28e4" containerName="init" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.156851 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="0db8affb-2742-46e4-a19d-a907e5c6d28d" containerName="glance-db-sync" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.169584 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.174682 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-p8ttj"] Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.319230 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-p8ttj\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.319344 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-p8ttj\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.319393 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-p8ttj\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.319413 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-config\") pod \"dnsmasq-dns-56df8fb6b7-p8ttj\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.319459 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-p8ttj\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.319497 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbg6c\" (UniqueName: \"kubernetes.io/projected/b7e2beb9-4b67-4852-9cd8-11ac78684181-kube-api-access-zbg6c\") pod \"dnsmasq-dns-56df8fb6b7-p8ttj\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.421337 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-p8ttj\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.421407 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-p8ttj\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.421435 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-config\") pod \"dnsmasq-dns-56df8fb6b7-p8ttj\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.421476 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-p8ttj\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.421514 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbg6c\" (UniqueName: \"kubernetes.io/projected/b7e2beb9-4b67-4852-9cd8-11ac78684181-kube-api-access-zbg6c\") pod \"dnsmasq-dns-56df8fb6b7-p8ttj\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.421534 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-p8ttj\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.422473 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-p8ttj\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.422991 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-p8ttj\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.423395 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-p8ttj\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.423555 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-config\") pod \"dnsmasq-dns-56df8fb6b7-p8ttj\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.424156 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-p8ttj\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.443918 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbg6c\" (UniqueName: \"kubernetes.io/projected/b7e2beb9-4b67-4852-9cd8-11ac78684181-kube-api-access-zbg6c\") pod \"dnsmasq-dns-56df8fb6b7-p8ttj\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.498471 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.619918 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" podUID="15826bdf-7267-4160-b0b8-f4eb3b76eae0" containerName="dnsmasq-dns" containerID="cri-o://b644ec0f34c805a147d0aaac4270a43d0a8c6e6b4e5e8c04ea5f090ec8694820" gracePeriod=10 Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.620124 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-wl8ql" event={"ID":"d046b6fd-1000-4f80-af20-d756adbab2ea","Type":"ContainerStarted","Data":"6f854c5a231e9496f68db3f3102285b6259db1e80c055dccb160908751e3a010"} Feb 18 19:38:05 crc kubenswrapper[4754]: I0218 19:38:05.652853 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-wl8ql" podStartSLOduration=8.531373936 podStartE2EDuration="44.652830175s" podCreationTimestamp="2026-02-18 19:37:21 +0000 UTC" firstStartedPulling="2026-02-18 19:37:28.186197385 +0000 UTC m=+1150.636610191" lastFinishedPulling="2026-02-18 19:38:04.307653634 +0000 UTC m=+1186.758066430" observedRunningTime="2026-02-18 19:38:05.645927242 +0000 UTC m=+1188.096340038" watchObservedRunningTime="2026-02-18 19:38:05.652830175 +0000 UTC m=+1188.103242971" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.090361 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-p8ttj"] Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.128980 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.130806 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.135879 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.136118 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ld58k" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.136203 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.198273 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.248384 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.248448 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r2jp\" (UniqueName: \"kubernetes.io/projected/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-kube-api-access-5r2jp\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.248477 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 
19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.248511 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-config-data\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.248549 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-scripts\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.248614 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.248700 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-logs\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.350984 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-config-data\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 
19:38:06.351072 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-scripts\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.351120 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.351279 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-logs\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.351328 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.351366 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r2jp\" (UniqueName: \"kubernetes.io/projected/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-kube-api-access-5r2jp\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.351392 4754 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.351999 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.356757 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-logs\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.366167 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.370469 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-scripts\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.414182 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.418342 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.424808 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.429332 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.443494 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r2jp\" (UniqueName: \"kubernetes.io/projected/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-kube-api-access-5r2jp\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.445986 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.457669 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.457735 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.457795 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.457830 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.457892 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.457986 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-logs\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.458032 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whzw2\" (UniqueName: \"kubernetes.io/projected/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-kube-api-access-whzw2\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.516230 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.562806 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.562905 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.562997 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-logs\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.563038 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whzw2\" (UniqueName: \"kubernetes.io/projected/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-kube-api-access-whzw2\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " 
pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.563091 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.563111 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.563168 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.567641 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.567970 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 
19:38:06.568347 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.577084 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-logs\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.580125 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.588226 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.594682 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whzw2\" (UniqueName: \"kubernetes.io/projected/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-kube-api-access-whzw2\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.596483 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.620083 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.653188 4754 generic.go:334] "Generic (PLEG): container finished" podID="15826bdf-7267-4160-b0b8-f4eb3b76eae0" containerID="b644ec0f34c805a147d0aaac4270a43d0a8c6e6b4e5e8c04ea5f090ec8694820" exitCode=0 Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.653275 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" event={"ID":"15826bdf-7267-4160-b0b8-f4eb3b76eae0","Type":"ContainerDied","Data":"b644ec0f34c805a147d0aaac4270a43d0a8c6e6b4e5e8c04ea5f090ec8694820"} Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.784849 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:38:06 crc kubenswrapper[4754]: I0218 19:38:06.840977 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:38:07 crc kubenswrapper[4754]: I0218 19:38:07.670214 4754 generic.go:334] "Generic (PLEG): container finished" podID="3eed4606-91cf-47df-8019-d4c6b7da9ab4" containerID="6373cbb734fe7f180fca59d2f0a4db6d337503aad8a4b3efddb13cd3da0f8e05" exitCode=0 Feb 18 19:38:07 crc kubenswrapper[4754]: I0218 19:38:07.670273 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gt579" event={"ID":"3eed4606-91cf-47df-8019-d4c6b7da9ab4","Type":"ContainerDied","Data":"6373cbb734fe7f180fca59d2f0a4db6d337503aad8a4b3efddb13cd3da0f8e05"} Feb 18 19:38:09 crc kubenswrapper[4754]: W0218 19:38:09.430847 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7e2beb9_4b67_4852_9cd8_11ac78684181.slice/crio-ddcc3034bed9db28bc0c64a9448d5f909951ba4ddf9423ec066223d31802765b WatchSource:0}: Error finding container ddcc3034bed9db28bc0c64a9448d5f909951ba4ddf9423ec066223d31802765b: Status 404 returned error can't find the container with id ddcc3034bed9db28bc0c64a9448d5f909951ba4ddf9423ec066223d31802765b Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.537657 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-gt579" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.631011 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-scripts\") pod \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.631152 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-combined-ca-bundle\") pod \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.631206 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtgdq\" (UniqueName: \"kubernetes.io/projected/3eed4606-91cf-47df-8019-d4c6b7da9ab4-kube-api-access-mtgdq\") pod \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.631241 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-config-data\") pod \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.631279 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-fernet-keys\") pod \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.631352 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-credential-keys\") pod \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\" (UID: \"3eed4606-91cf-47df-8019-d4c6b7da9ab4\") " Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.642376 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-scripts" (OuterVolumeSpecName: "scripts") pod "3eed4606-91cf-47df-8019-d4c6b7da9ab4" (UID: "3eed4606-91cf-47df-8019-d4c6b7da9ab4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.643096 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3eed4606-91cf-47df-8019-d4c6b7da9ab4" (UID: "3eed4606-91cf-47df-8019-d4c6b7da9ab4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.652447 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eed4606-91cf-47df-8019-d4c6b7da9ab4-kube-api-access-mtgdq" (OuterVolumeSpecName: "kube-api-access-mtgdq") pod "3eed4606-91cf-47df-8019-d4c6b7da9ab4" (UID: "3eed4606-91cf-47df-8019-d4c6b7da9ab4"). InnerVolumeSpecName "kube-api-access-mtgdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.677335 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-config-data" (OuterVolumeSpecName: "config-data") pod "3eed4606-91cf-47df-8019-d4c6b7da9ab4" (UID: "3eed4606-91cf-47df-8019-d4c6b7da9ab4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.686190 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3eed4606-91cf-47df-8019-d4c6b7da9ab4" (UID: "3eed4606-91cf-47df-8019-d4c6b7da9ab4"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.692803 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3eed4606-91cf-47df-8019-d4c6b7da9ab4" (UID: "3eed4606-91cf-47df-8019-d4c6b7da9ab4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.738771 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtgdq\" (UniqueName: \"kubernetes.io/projected/3eed4606-91cf-47df-8019-d4c6b7da9ab4-kube-api-access-mtgdq\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.738819 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.738830 4754 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.738843 4754 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-credential-keys\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.738852 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.738861 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eed4606-91cf-47df-8019-d4c6b7da9ab4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.743204 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" event={"ID":"b7e2beb9-4b67-4852-9cd8-11ac78684181","Type":"ContainerStarted","Data":"ddcc3034bed9db28bc0c64a9448d5f909951ba4ddf9423ec066223d31802765b"} Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.751721 4754 generic.go:334] "Generic (PLEG): container finished" podID="d046b6fd-1000-4f80-af20-d756adbab2ea" containerID="6f854c5a231e9496f68db3f3102285b6259db1e80c055dccb160908751e3a010" exitCode=0 Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.751844 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-wl8ql" event={"ID":"d046b6fd-1000-4f80-af20-d756adbab2ea","Type":"ContainerDied","Data":"6f854c5a231e9496f68db3f3102285b6259db1e80c055dccb160908751e3a010"} Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.764337 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gt579" event={"ID":"3eed4606-91cf-47df-8019-d4c6b7da9ab4","Type":"ContainerDied","Data":"59ec01112d803cd94787da8f84c9c9c0a3915b70e94ff78a01c4a438886efc2c"} Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.764383 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59ec01112d803cd94787da8f84c9c9c0a3915b70e94ff78a01c4a438886efc2c" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 
19:38:09.764435 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gt579" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.799544 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-gt579"] Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.809364 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-gt579"] Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.876462 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-kd677"] Feb 18 19:38:09 crc kubenswrapper[4754]: E0218 19:38:09.877006 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eed4606-91cf-47df-8019-d4c6b7da9ab4" containerName="keystone-bootstrap" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.877026 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eed4606-91cf-47df-8019-d4c6b7da9ab4" containerName="keystone-bootstrap" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.877320 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eed4606-91cf-47df-8019-d4c6b7da9ab4" containerName="keystone-bootstrap" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.878441 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.881461 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gvkx6" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.881555 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.881622 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.881818 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.884356 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:38:09 crc kubenswrapper[4754]: I0218 19:38:09.894910 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kd677"] Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.044803 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-scripts\") pod \"keystone-bootstrap-kd677\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.045388 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8f9k\" (UniqueName: \"kubernetes.io/projected/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-kube-api-access-j8f9k\") pod \"keystone-bootstrap-kd677\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.045508 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-credential-keys\") pod \"keystone-bootstrap-kd677\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.045615 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-combined-ca-bundle\") pod \"keystone-bootstrap-kd677\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.045714 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-config-data\") pod \"keystone-bootstrap-kd677\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.045785 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-fernet-keys\") pod \"keystone-bootstrap-kd677\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.147944 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8f9k\" (UniqueName: \"kubernetes.io/projected/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-kube-api-access-j8f9k\") pod \"keystone-bootstrap-kd677\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.148014 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-credential-keys\") pod \"keystone-bootstrap-kd677\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.148043 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-combined-ca-bundle\") pod \"keystone-bootstrap-kd677\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.148073 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-config-data\") pod \"keystone-bootstrap-kd677\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.148101 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-fernet-keys\") pod \"keystone-bootstrap-kd677\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.148185 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-scripts\") pod \"keystone-bootstrap-kd677\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.152809 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-credential-keys\") pod \"keystone-bootstrap-kd677\" (UID: 
\"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.153710 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-fernet-keys\") pod \"keystone-bootstrap-kd677\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.154030 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-scripts\") pod \"keystone-bootstrap-kd677\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.155935 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-config-data\") pod \"keystone-bootstrap-kd677\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.156232 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-combined-ca-bundle\") pod \"keystone-bootstrap-kd677\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.170934 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8f9k\" (UniqueName: \"kubernetes.io/projected/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-kube-api-access-j8f9k\") pod \"keystone-bootstrap-kd677\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.210014 
4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:10 crc kubenswrapper[4754]: I0218 19:38:10.233438 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eed4606-91cf-47df-8019-d4c6b7da9ab4" path="/var/lib/kubelet/pods/3eed4606-91cf-47df-8019-d4c6b7da9ab4/volumes" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.253461 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.341289 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.632729 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-888954555-c8j52"] Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.662092 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-9577ccdb8-nfcx9"] Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.669823 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.675213 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.703905 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9577ccdb8-nfcx9"] Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.766353 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-946b65d6f-f4rwt"] Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.807496 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5f7766589b-gh94d"] Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.809294 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afcabe6-a035-4ecd-8522-93afd1691f25-combined-ca-bundle\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.809390 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8afcabe6-a035-4ecd-8522-93afd1691f25-scripts\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.809440 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8afcabe6-a035-4ecd-8522-93afd1691f25-logs\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.809475 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgmv9\" (UniqueName: \"kubernetes.io/projected/8afcabe6-a035-4ecd-8522-93afd1691f25-kube-api-access-xgmv9\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.809525 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8afcabe6-a035-4ecd-8522-93afd1691f25-horizon-tls-certs\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.809549 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8afcabe6-a035-4ecd-8522-93afd1691f25-horizon-secret-key\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.809590 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8afcabe6-a035-4ecd-8522-93afd1691f25-config-data\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.811467 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.826963 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f7766589b-gh94d"] Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.912026 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c99f043f-84fb-4825-8ba7-c918263e6c7f-horizon-tls-certs\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.912122 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8afcabe6-a035-4ecd-8522-93afd1691f25-config-data\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.912210 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c99f043f-84fb-4825-8ba7-c918263e6c7f-horizon-secret-key\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.912267 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2gc5\" (UniqueName: \"kubernetes.io/projected/c99f043f-84fb-4825-8ba7-c918263e6c7f-kube-api-access-n2gc5\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.912293 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/c99f043f-84fb-4825-8ba7-c918263e6c7f-logs\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.912323 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afcabe6-a035-4ecd-8522-93afd1691f25-combined-ca-bundle\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.912370 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c99f043f-84fb-4825-8ba7-c918263e6c7f-config-data\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.912397 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8afcabe6-a035-4ecd-8522-93afd1691f25-scripts\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.912439 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8afcabe6-a035-4ecd-8522-93afd1691f25-logs\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.912477 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgmv9\" (UniqueName: \"kubernetes.io/projected/8afcabe6-a035-4ecd-8522-93afd1691f25-kube-api-access-xgmv9\") pod 
\"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.912535 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99f043f-84fb-4825-8ba7-c918263e6c7f-combined-ca-bundle\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.912570 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8afcabe6-a035-4ecd-8522-93afd1691f25-horizon-tls-certs\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.912606 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8afcabe6-a035-4ecd-8522-93afd1691f25-horizon-secret-key\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.912643 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c99f043f-84fb-4825-8ba7-c918263e6c7f-scripts\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.915642 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8afcabe6-a035-4ecd-8522-93afd1691f25-scripts\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " 
pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.915921 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8afcabe6-a035-4ecd-8522-93afd1691f25-logs\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.916920 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8afcabe6-a035-4ecd-8522-93afd1691f25-config-data\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.923953 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afcabe6-a035-4ecd-8522-93afd1691f25-combined-ca-bundle\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.924344 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8afcabe6-a035-4ecd-8522-93afd1691f25-horizon-secret-key\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.924752 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8afcabe6-a035-4ecd-8522-93afd1691f25-horizon-tls-certs\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:12 crc kubenswrapper[4754]: I0218 19:38:12.936312 4754 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xgmv9\" (UniqueName: \"kubernetes.io/projected/8afcabe6-a035-4ecd-8522-93afd1691f25-kube-api-access-xgmv9\") pod \"horizon-9577ccdb8-nfcx9\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:13 crc kubenswrapper[4754]: I0218 19:38:13.014600 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:13 crc kubenswrapper[4754]: I0218 19:38:13.014931 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99f043f-84fb-4825-8ba7-c918263e6c7f-combined-ca-bundle\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:13 crc kubenswrapper[4754]: I0218 19:38:13.015953 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c99f043f-84fb-4825-8ba7-c918263e6c7f-scripts\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:13 crc kubenswrapper[4754]: I0218 19:38:13.016037 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c99f043f-84fb-4825-8ba7-c918263e6c7f-horizon-tls-certs\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:13 crc kubenswrapper[4754]: I0218 19:38:13.016230 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c99f043f-84fb-4825-8ba7-c918263e6c7f-horizon-secret-key\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:13 crc 
kubenswrapper[4754]: I0218 19:38:13.016323 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2gc5\" (UniqueName: \"kubernetes.io/projected/c99f043f-84fb-4825-8ba7-c918263e6c7f-kube-api-access-n2gc5\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:13 crc kubenswrapper[4754]: I0218 19:38:13.016360 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c99f043f-84fb-4825-8ba7-c918263e6c7f-logs\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:13 crc kubenswrapper[4754]: I0218 19:38:13.016434 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c99f043f-84fb-4825-8ba7-c918263e6c7f-config-data\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:13 crc kubenswrapper[4754]: I0218 19:38:13.018153 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c99f043f-84fb-4825-8ba7-c918263e6c7f-config-data\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:13 crc kubenswrapper[4754]: I0218 19:38:13.018474 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c99f043f-84fb-4825-8ba7-c918263e6c7f-logs\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:13 crc kubenswrapper[4754]: I0218 19:38:13.018875 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/c99f043f-84fb-4825-8ba7-c918263e6c7f-scripts\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:13 crc kubenswrapper[4754]: I0218 19:38:13.019607 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99f043f-84fb-4825-8ba7-c918263e6c7f-combined-ca-bundle\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:13 crc kubenswrapper[4754]: I0218 19:38:13.033704 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c99f043f-84fb-4825-8ba7-c918263e6c7f-horizon-secret-key\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:13 crc kubenswrapper[4754]: I0218 19:38:13.033917 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c99f043f-84fb-4825-8ba7-c918263e6c7f-horizon-tls-certs\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:13 crc kubenswrapper[4754]: I0218 19:38:13.038282 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2gc5\" (UniqueName: \"kubernetes.io/projected/c99f043f-84fb-4825-8ba7-c918263e6c7f-kube-api-access-n2gc5\") pod \"horizon-5f7766589b-gh94d\" (UID: \"c99f043f-84fb-4825-8ba7-c918263e6c7f\") " pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:13 crc kubenswrapper[4754]: I0218 19:38:13.147784 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:15 crc kubenswrapper[4754]: I0218 19:38:15.839042 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" podUID="15826bdf-7267-4160-b0b8-f4eb3b76eae0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: i/o timeout" Feb 18 19:38:16 crc kubenswrapper[4754]: E0218 19:38:16.538059 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 18 19:38:16 crc kubenswrapper[4754]: E0218 19:38:16.538906 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bfh58dhf5h58fh56ch64ch575h5b7h694h559h56bh677h58bh68fhb6h8ch5d8h9h689h5c8h56h6dh597h56fh5bh679h7dh95h589h648hd9hd9q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPro
pagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdrtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(742e0717-1560-424d-b0d3-4e7b46f8ec8c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:38:20 crc kubenswrapper[4754]: I0218 19:38:20.841240 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" podUID="15826bdf-7267-4160-b0b8-f4eb3b76eae0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: i/o timeout" Feb 18 19:38:21 crc kubenswrapper[4754]: E0218 19:38:21.178788 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 18 19:38:21 crc kubenswrapper[4754]: E0218 19:38:21.179582 4754 kuberuntime_manager.go:1274] "Unhandled 
Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lt2b5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-b2mr5_openstack(9a109a6c-ffaa-479e-95e6-ef033aec4b27): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:38:21 crc kubenswrapper[4754]: E0218 19:38:21.180900 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-b2mr5" podUID="9a109a6c-ffaa-479e-95e6-ef033aec4b27" Feb 18 19:38:21 crc kubenswrapper[4754]: E0218 19:38:21.190442 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 18 19:38:21 crc kubenswrapper[4754]: E0218 19:38:21.190709 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc9h8bh679h5f8hf6h7dh597h58ch5bdh9h548h9fh66fh6fh54ch5cfh5c8h575h9ch685h5bbh59chbbh559h7fh5dch599h99h66bh64bh645hbfq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlxg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPre
sent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-946b65d6f-f4rwt_openstack(f46e1cd8-a675-4ee6-a25c-025c9a00c7f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:38:21 crc kubenswrapper[4754]: E0218 19:38:21.195796 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-946b65d6f-f4rwt" podUID="f46e1cd8-a675-4ee6-a25c-025c9a00c7f0" Feb 18 19:38:21 crc kubenswrapper[4754]: E0218 19:38:21.914990 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-b2mr5" podUID="9a109a6c-ffaa-479e-95e6-ef033aec4b27" Feb 18 19:38:22 crc kubenswrapper[4754]: E0218 19:38:22.576750 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 18 19:38:22 crc kubenswrapper[4754]: E0218 19:38:22.579675 4754 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kb7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-5d7g9_openstack(5747d187-87f8-4baa-b0aa-65916db69601): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:38:22 crc kubenswrapper[4754]: E0218 19:38:22.580902 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-5d7g9" podUID="5747d187-87f8-4baa-b0aa-65916db69601" Feb 18 19:38:22 crc kubenswrapper[4754]: E0218 19:38:22.924185 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-5d7g9" podUID="5747d187-87f8-4baa-b0aa-65916db69601" Feb 18 19:38:23 crc kubenswrapper[4754]: I0218 19:38:23.935712 4754 generic.go:334] "Generic (PLEG): container finished" podID="b1abaf62-0594-4378-bea6-b5dc29d52241" containerID="7b975af1f6f66177a58d1deec670fbf1439aadfbcb2e00637d32275d7dd3dd0b" exitCode=0 Feb 18 19:38:23 crc kubenswrapper[4754]: I0218 19:38:23.935798 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kkj6c" event={"ID":"b1abaf62-0594-4378-bea6-b5dc29d52241","Type":"ContainerDied","Data":"7b975af1f6f66177a58d1deec670fbf1439aadfbcb2e00637d32275d7dd3dd0b"} Feb 18 19:38:25 crc kubenswrapper[4754]: I0218 19:38:25.843362 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" podUID="15826bdf-7267-4160-b0b8-f4eb3b76eae0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: i/o timeout" Feb 18 19:38:29 crc kubenswrapper[4754]: I0218 
19:38:29.930756 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-wl8ql" Feb 18 19:38:29 crc kubenswrapper[4754]: I0218 19:38:29.939418 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.036423 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" event={"ID":"15826bdf-7267-4160-b0b8-f4eb3b76eae0","Type":"ContainerDied","Data":"c7f2377c2c4811d43f135490f988a8e8ab2d4966e83205b4cdf52de9c41e2265"} Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.036501 4754 scope.go:117] "RemoveContainer" containerID="b644ec0f34c805a147d0aaac4270a43d0a8c6e6b4e5e8c04ea5f090ec8694820" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.036665 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.046987 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-946b65d6f-f4rwt" event={"ID":"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0","Type":"ContainerDied","Data":"275f3d79255189e9eb26d8aca3fbbe86bc9ef5f69d1eec66146bb2c797e16e17"} Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.047332 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="275f3d79255189e9eb26d8aca3fbbe86bc9ef5f69d1eec66146bb2c797e16e17" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.048451 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-946b65d6f-f4rwt" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.051826 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kkj6c" event={"ID":"b1abaf62-0594-4378-bea6-b5dc29d52241","Type":"ContainerDied","Data":"4fccfae1439010a409d42616b972226c9e83aa57e2a945d3fe8a229792088aa9"} Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.051865 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fccfae1439010a409d42616b972226c9e83aa57e2a945d3fe8a229792088aa9" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.059226 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kkj6c" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.059294 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-wl8ql" event={"ID":"d046b6fd-1000-4f80-af20-d756adbab2ea","Type":"ContainerDied","Data":"e64df57d4ef3cb3cdba8fb6f1ca5ba9fa8bef07362b4cd35bd5fa4505eae3eea"} Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.059345 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e64df57d4ef3cb3cdba8fb6f1ca5ba9fa8bef07362b4cd35bd5fa4505eae3eea" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.059307 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-wl8ql" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.103106 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-dns-swift-storage-0\") pod \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.103224 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-config\") pod \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.103302 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-ovsdbserver-nb\") pod \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.103359 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d046b6fd-1000-4f80-af20-d756adbab2ea-config-data\") pod \"d046b6fd-1000-4f80-af20-d756adbab2ea\" (UID: \"d046b6fd-1000-4f80-af20-d756adbab2ea\") " Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.103396 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6djg\" (UniqueName: \"kubernetes.io/projected/d046b6fd-1000-4f80-af20-d756adbab2ea-kube-api-access-q6djg\") pod \"d046b6fd-1000-4f80-af20-d756adbab2ea\" (UID: \"d046b6fd-1000-4f80-af20-d756adbab2ea\") " Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.103451 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-mb9zv\" (UniqueName: \"kubernetes.io/projected/15826bdf-7267-4160-b0b8-f4eb3b76eae0-kube-api-access-mb9zv\") pod \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.103499 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-ovsdbserver-sb\") pod \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.103620 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d046b6fd-1000-4f80-af20-d756adbab2ea-combined-ca-bundle\") pod \"d046b6fd-1000-4f80-af20-d756adbab2ea\" (UID: \"d046b6fd-1000-4f80-af20-d756adbab2ea\") " Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.103652 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-dns-svc\") pod \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\" (UID: \"15826bdf-7267-4160-b0b8-f4eb3b76eae0\") " Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.103799 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d046b6fd-1000-4f80-af20-d756adbab2ea-db-sync-config-data\") pod \"d046b6fd-1000-4f80-af20-d756adbab2ea\" (UID: \"d046b6fd-1000-4f80-af20-d756adbab2ea\") " Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.112055 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d046b6fd-1000-4f80-af20-d756adbab2ea-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d046b6fd-1000-4f80-af20-d756adbab2ea" (UID: "d046b6fd-1000-4f80-af20-d756adbab2ea"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.114577 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d046b6fd-1000-4f80-af20-d756adbab2ea-kube-api-access-q6djg" (OuterVolumeSpecName: "kube-api-access-q6djg") pod "d046b6fd-1000-4f80-af20-d756adbab2ea" (UID: "d046b6fd-1000-4f80-af20-d756adbab2ea"). InnerVolumeSpecName "kube-api-access-q6djg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.146483 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15826bdf-7267-4160-b0b8-f4eb3b76eae0-kube-api-access-mb9zv" (OuterVolumeSpecName: "kube-api-access-mb9zv") pod "15826bdf-7267-4160-b0b8-f4eb3b76eae0" (UID: "15826bdf-7267-4160-b0b8-f4eb3b76eae0"). InnerVolumeSpecName "kube-api-access-mb9zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.151851 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d046b6fd-1000-4f80-af20-d756adbab2ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d046b6fd-1000-4f80-af20-d756adbab2ea" (UID: "d046b6fd-1000-4f80-af20-d756adbab2ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.171893 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-config" (OuterVolumeSpecName: "config") pod "15826bdf-7267-4160-b0b8-f4eb3b76eae0" (UID: "15826bdf-7267-4160-b0b8-f4eb3b76eae0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.178677 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "15826bdf-7267-4160-b0b8-f4eb3b76eae0" (UID: "15826bdf-7267-4160-b0b8-f4eb3b76eae0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.188165 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "15826bdf-7267-4160-b0b8-f4eb3b76eae0" (UID: "15826bdf-7267-4160-b0b8-f4eb3b76eae0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.190500 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "15826bdf-7267-4160-b0b8-f4eb3b76eae0" (UID: "15826bdf-7267-4160-b0b8-f4eb3b76eae0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.191642 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d046b6fd-1000-4f80-af20-d756adbab2ea-config-data" (OuterVolumeSpecName: "config-data") pod "d046b6fd-1000-4f80-af20-d756adbab2ea" (UID: "d046b6fd-1000-4f80-af20-d756adbab2ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.195455 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "15826bdf-7267-4160-b0b8-f4eb3b76eae0" (UID: "15826bdf-7267-4160-b0b8-f4eb3b76eae0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.206355 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlxg9\" (UniqueName: \"kubernetes.io/projected/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-kube-api-access-rlxg9\") pod \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.206447 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmt5x\" (UniqueName: \"kubernetes.io/projected/b1abaf62-0594-4378-bea6-b5dc29d52241-kube-api-access-wmt5x\") pod \"b1abaf62-0594-4378-bea6-b5dc29d52241\" (UID: \"b1abaf62-0594-4378-bea6-b5dc29d52241\") " Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.206483 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-scripts\") pod \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.206727 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1abaf62-0594-4378-bea6-b5dc29d52241-config\") pod \"b1abaf62-0594-4378-bea6-b5dc29d52241\" (UID: \"b1abaf62-0594-4378-bea6-b5dc29d52241\") " Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.206813 4754 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1abaf62-0594-4378-bea6-b5dc29d52241-combined-ca-bundle\") pod \"b1abaf62-0594-4378-bea6-b5dc29d52241\" (UID: \"b1abaf62-0594-4378-bea6-b5dc29d52241\") " Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.206855 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-config-data\") pod \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.207017 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-logs\") pod \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.207057 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-horizon-secret-key\") pod \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\" (UID: \"f46e1cd8-a675-4ee6-a25c-025c9a00c7f0\") " Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.207554 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d046b6fd-1000-4f80-af20-d756adbab2ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.207595 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.207608 4754 reconciler_common.go:293] "Volume detached for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d046b6fd-1000-4f80-af20-d756adbab2ea-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.207629 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.207643 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.207654 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.207667 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d046b6fd-1000-4f80-af20-d756adbab2ea-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.207677 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6djg\" (UniqueName: \"kubernetes.io/projected/d046b6fd-1000-4f80-af20-d756adbab2ea-kube-api-access-q6djg\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.207690 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb9zv\" (UniqueName: \"kubernetes.io/projected/15826bdf-7267-4160-b0b8-f4eb3b76eae0-kube-api-access-mb9zv\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.207699 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/15826bdf-7267-4160-b0b8-f4eb3b76eae0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.207830 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-scripts" (OuterVolumeSpecName: "scripts") pod "f46e1cd8-a675-4ee6-a25c-025c9a00c7f0" (UID: "f46e1cd8-a675-4ee6-a25c-025c9a00c7f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.208029 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-config-data" (OuterVolumeSpecName: "config-data") pod "f46e1cd8-a675-4ee6-a25c-025c9a00c7f0" (UID: "f46e1cd8-a675-4ee6-a25c-025c9a00c7f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.208440 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-logs" (OuterVolumeSpecName: "logs") pod "f46e1cd8-a675-4ee6-a25c-025c9a00c7f0" (UID: "f46e1cd8-a675-4ee6-a25c-025c9a00c7f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.212781 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1abaf62-0594-4378-bea6-b5dc29d52241-kube-api-access-wmt5x" (OuterVolumeSpecName: "kube-api-access-wmt5x") pod "b1abaf62-0594-4378-bea6-b5dc29d52241" (UID: "b1abaf62-0594-4378-bea6-b5dc29d52241"). InnerVolumeSpecName "kube-api-access-wmt5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.214822 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f46e1cd8-a675-4ee6-a25c-025c9a00c7f0" (UID: "f46e1cd8-a675-4ee6-a25c-025c9a00c7f0"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.215134 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-kube-api-access-rlxg9" (OuterVolumeSpecName: "kube-api-access-rlxg9") pod "f46e1cd8-a675-4ee6-a25c-025c9a00c7f0" (UID: "f46e1cd8-a675-4ee6-a25c-025c9a00c7f0"). InnerVolumeSpecName "kube-api-access-rlxg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.239116 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1abaf62-0594-4378-bea6-b5dc29d52241-config" (OuterVolumeSpecName: "config") pod "b1abaf62-0594-4378-bea6-b5dc29d52241" (UID: "b1abaf62-0594-4378-bea6-b5dc29d52241"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.244450 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1abaf62-0594-4378-bea6-b5dc29d52241-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1abaf62-0594-4378-bea6-b5dc29d52241" (UID: "b1abaf62-0594-4378-bea6-b5dc29d52241"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.309376 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.309430 4754 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.309443 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlxg9\" (UniqueName: \"kubernetes.io/projected/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-kube-api-access-rlxg9\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.309458 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmt5x\" (UniqueName: \"kubernetes.io/projected/b1abaf62-0594-4378-bea6-b5dc29d52241-kube-api-access-wmt5x\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.309470 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.309484 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b1abaf62-0594-4378-bea6-b5dc29d52241-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.309494 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1abaf62-0594-4378-bea6-b5dc29d52241-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.309504 4754 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.366569 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-gr9m6"] Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.376822 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-gr9m6"] Feb 18 19:38:30 crc kubenswrapper[4754]: I0218 19:38:30.848520 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf78879c9-gr9m6" podUID="15826bdf-7267-4160-b0b8-f4eb3b76eae0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: i/o timeout" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.068178 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-946b65d6f-f4rwt" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.069626 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kkj6c" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.117498 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-946b65d6f-f4rwt"] Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.137449 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-946b65d6f-f4rwt"] Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.345050 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:38:31 crc kubenswrapper[4754]: E0218 19:38:31.345622 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d046b6fd-1000-4f80-af20-d756adbab2ea" containerName="watcher-db-sync" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.345642 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="d046b6fd-1000-4f80-af20-d756adbab2ea" containerName="watcher-db-sync" Feb 18 19:38:31 crc kubenswrapper[4754]: E0218 19:38:31.345662 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1abaf62-0594-4378-bea6-b5dc29d52241" containerName="neutron-db-sync" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.345671 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1abaf62-0594-4378-bea6-b5dc29d52241" containerName="neutron-db-sync" Feb 18 19:38:31 crc kubenswrapper[4754]: E0218 19:38:31.345683 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15826bdf-7267-4160-b0b8-f4eb3b76eae0" containerName="dnsmasq-dns" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.345690 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="15826bdf-7267-4160-b0b8-f4eb3b76eae0" containerName="dnsmasq-dns" Feb 18 19:38:31 crc kubenswrapper[4754]: E0218 19:38:31.345705 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15826bdf-7267-4160-b0b8-f4eb3b76eae0" containerName="init" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.345712 4754 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="15826bdf-7267-4160-b0b8-f4eb3b76eae0" containerName="init" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.345884 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1abaf62-0594-4378-bea6-b5dc29d52241" containerName="neutron-db-sync" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.345900 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="d046b6fd-1000-4f80-af20-d756adbab2ea" containerName="watcher-db-sync" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.345925 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="15826bdf-7267-4160-b0b8-f4eb3b76eae0" containerName="dnsmasq-dns" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.347045 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.350692 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.350961 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-z76fp" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.366393 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.368467 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.374367 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.387995 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.407257 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.430077 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-p8ttj"] Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.439094 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33ee2423-3834-4e72-98f8-7c799d966afa-logs\") pod \"watcher-api-0\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " pod="openstack/watcher-api-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.439196 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ee2423-3834-4e72-98f8-7c799d966afa-config-data\") pod \"watcher-api-0\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " pod="openstack/watcher-api-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.439309 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/33ee2423-3834-4e72-98f8-7c799d966afa-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " pod="openstack/watcher-api-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.439327 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/33ee2423-3834-4e72-98f8-7c799d966afa-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " pod="openstack/watcher-api-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.439367 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8wpz\" (UniqueName: \"kubernetes.io/projected/33ee2423-3834-4e72-98f8-7c799d966afa-kube-api-access-v8wpz\") pod \"watcher-api-0\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " pod="openstack/watcher-api-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.464264 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-jlqj8"] Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.477593 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-jlqj8"] Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.477764 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.497635 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.524590 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.561841 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.591906 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j7n6\" (UniqueName: \"kubernetes.io/projected/68c4cf96-8818-4c3b-b6f4-6b61f985865e-kube-api-access-6j7n6\") pod \"watcher-applier-0\" (UID: \"68c4cf96-8818-4c3b-b6f4-6b61f985865e\") " pod="openstack/watcher-applier-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.592063 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/33ee2423-3834-4e72-98f8-7c799d966afa-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " pod="openstack/watcher-api-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.592113 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ee2423-3834-4e72-98f8-7c799d966afa-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " pod="openstack/watcher-api-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.592235 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8wpz\" (UniqueName: \"kubernetes.io/projected/33ee2423-3834-4e72-98f8-7c799d966afa-kube-api-access-v8wpz\") pod \"watcher-api-0\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " pod="openstack/watcher-api-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.595377 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/68c4cf96-8818-4c3b-b6f4-6b61f985865e-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"68c4cf96-8818-4c3b-b6f4-6b61f985865e\") " pod="openstack/watcher-applier-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.597314 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.598085 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33ee2423-3834-4e72-98f8-7c799d966afa-logs\") pod \"watcher-api-0\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " pod="openstack/watcher-api-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.600459 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ee2423-3834-4e72-98f8-7c799d966afa-config-data\") pod \"watcher-api-0\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " pod="openstack/watcher-api-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.600547 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c4cf96-8818-4c3b-b6f4-6b61f985865e-config-data\") pod \"watcher-applier-0\" (UID: \"68c4cf96-8818-4c3b-b6f4-6b61f985865e\") " pod="openstack/watcher-applier-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.600688 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68c4cf96-8818-4c3b-b6f4-6b61f985865e-logs\") pod \"watcher-applier-0\" (UID: \"68c4cf96-8818-4c3b-b6f4-6b61f985865e\") " pod="openstack/watcher-applier-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.601371 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/33ee2423-3834-4e72-98f8-7c799d966afa-logs\") pod \"watcher-api-0\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " pod="openstack/watcher-api-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.615369 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ee2423-3834-4e72-98f8-7c799d966afa-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " pod="openstack/watcher-api-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.619267 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ee2423-3834-4e72-98f8-7c799d966afa-config-data\") pod \"watcher-api-0\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " pod="openstack/watcher-api-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.669610 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8wpz\" (UniqueName: \"kubernetes.io/projected/33ee2423-3834-4e72-98f8-7c799d966afa-kube-api-access-v8wpz\") pod \"watcher-api-0\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " pod="openstack/watcher-api-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.682416 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/33ee2423-3834-4e72-98f8-7c799d966afa-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " pod="openstack/watcher-api-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.692597 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7c6fb7f68-h72q7"] Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.694569 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.697613 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.701338 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.701503 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fkl4d" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.701843 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.701522 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.704299 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c4cf96-8818-4c3b-b6f4-6b61f985865e-config-data\") pod \"watcher-applier-0\" (UID: \"68c4cf96-8818-4c3b-b6f4-6b61f985865e\") " pod="openstack/watcher-applier-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.704389 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-dns-svc\") pod \"dnsmasq-dns-6b7b667979-jlqj8\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.704421 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68c4cf96-8818-4c3b-b6f4-6b61f985865e-logs\") pod \"watcher-applier-0\" (UID: \"68c4cf96-8818-4c3b-b6f4-6b61f985865e\") " pod="openstack/watcher-applier-0" 
Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.704451 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-config\") pod \"dnsmasq-dns-6b7b667979-jlqj8\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.704487 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/593f74ff-3d0a-4bf8-be87-e34fdda1b202-logs\") pod \"watcher-decision-engine-0\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.704510 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593f74ff-3d0a-4bf8-be87-e34fdda1b202-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.704545 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j7n6\" (UniqueName: \"kubernetes.io/projected/68c4cf96-8818-4c3b-b6f4-6b61f985865e-kube-api-access-6j7n6\") pod \"watcher-applier-0\" (UID: \"68c4cf96-8818-4c3b-b6f4-6b61f985865e\") " pod="openstack/watcher-applier-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.704578 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-config\") pod \"neutron-7c6fb7f68-h72q7\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 
19:38:31.704609 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-ovndb-tls-certs\") pod \"neutron-7c6fb7f68-h72q7\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.704660 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-combined-ca-bundle\") pod \"neutron-7c6fb7f68-h72q7\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.704686 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbn6g\" (UniqueName: \"kubernetes.io/projected/593f74ff-3d0a-4bf8-be87-e34fdda1b202-kube-api-access-nbn6g\") pod \"watcher-decision-engine-0\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.704727 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-httpd-config\") pod \"neutron-7c6fb7f68-h72q7\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.704757 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xss4g\" (UniqueName: \"kubernetes.io/projected/272c8937-1ebd-44f6-8514-030c1be0af24-kube-api-access-xss4g\") pod \"dnsmasq-dns-6b7b667979-jlqj8\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:31 
crc kubenswrapper[4754]: I0218 19:38:31.704784 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-jlqj8\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.704827 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c4cf96-8818-4c3b-b6f4-6b61f985865e-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"68c4cf96-8818-4c3b-b6f4-6b61f985865e\") " pod="openstack/watcher-applier-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.704887 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7smpj\" (UniqueName: \"kubernetes.io/projected/5de1542f-0d57-4a6e-bfac-7557e6dda66e-kube-api-access-7smpj\") pod \"neutron-7c6fb7f68-h72q7\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.704919 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-jlqj8\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.704946 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-jlqj8\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " 
pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.704974 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593f74ff-3d0a-4bf8-be87-e34fdda1b202-config-data\") pod \"watcher-decision-engine-0\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.705007 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/593f74ff-3d0a-4bf8-be87-e34fdda1b202-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.705638 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68c4cf96-8818-4c3b-b6f4-6b61f985865e-logs\") pod \"watcher-applier-0\" (UID: \"68c4cf96-8818-4c3b-b6f4-6b61f985865e\") " pod="openstack/watcher-applier-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.711482 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c4cf96-8818-4c3b-b6f4-6b61f985865e-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"68c4cf96-8818-4c3b-b6f4-6b61f985865e\") " pod="openstack/watcher-applier-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.717205 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c6fb7f68-h72q7"] Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.724762 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c4cf96-8818-4c3b-b6f4-6b61f985865e-config-data\") pod \"watcher-applier-0\" (UID: 
\"68c4cf96-8818-4c3b-b6f4-6b61f985865e\") " pod="openstack/watcher-applier-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.731772 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j7n6\" (UniqueName: \"kubernetes.io/projected/68c4cf96-8818-4c3b-b6f4-6b61f985865e-kube-api-access-6j7n6\") pod \"watcher-applier-0\" (UID: \"68c4cf96-8818-4c3b-b6f4-6b61f985865e\") " pod="openstack/watcher-applier-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.739307 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:38:31 crc kubenswrapper[4754]: E0218 19:38:31.790983 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 18 19:38:31 crc kubenswrapper[4754]: E0218 19:38:31.791238 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwmnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-79xk6_openstack(fc061809-61de-4d52-909b-e2d4957dc4a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:38:31 crc kubenswrapper[4754]: E0218 19:38:31.792793 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-79xk6" podUID="fc061809-61de-4d52-909b-e2d4957dc4a4" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.806427 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-dns-svc\") pod \"dnsmasq-dns-6b7b667979-jlqj8\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.806480 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-config\") pod \"dnsmasq-dns-6b7b667979-jlqj8\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.806513 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/593f74ff-3d0a-4bf8-be87-e34fdda1b202-logs\") pod \"watcher-decision-engine-0\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.806541 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593f74ff-3d0a-4bf8-be87-e34fdda1b202-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.806573 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-config\") pod \"neutron-7c6fb7f68-h72q7\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.806596 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-ovndb-tls-certs\") pod \"neutron-7c6fb7f68-h72q7\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.806628 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-combined-ca-bundle\") pod \"neutron-7c6fb7f68-h72q7\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.806651 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbn6g\" (UniqueName: \"kubernetes.io/projected/593f74ff-3d0a-4bf8-be87-e34fdda1b202-kube-api-access-nbn6g\") pod \"watcher-decision-engine-0\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.806675 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-httpd-config\") pod \"neutron-7c6fb7f68-h72q7\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.806696 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xss4g\" (UniqueName: \"kubernetes.io/projected/272c8937-1ebd-44f6-8514-030c1be0af24-kube-api-access-xss4g\") pod \"dnsmasq-dns-6b7b667979-jlqj8\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.806715 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-jlqj8\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.806770 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7smpj\" (UniqueName: \"kubernetes.io/projected/5de1542f-0d57-4a6e-bfac-7557e6dda66e-kube-api-access-7smpj\") pod \"neutron-7c6fb7f68-h72q7\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.806796 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-jlqj8\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.806825 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-jlqj8\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.806848 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593f74ff-3d0a-4bf8-be87-e34fdda1b202-config-data\") pod \"watcher-decision-engine-0\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.806870 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/593f74ff-3d0a-4bf8-be87-e34fdda1b202-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.809954 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-config\") pod \"dnsmasq-dns-6b7b667979-jlqj8\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.810701 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-dns-svc\") pod \"dnsmasq-dns-6b7b667979-jlqj8\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.811940 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-jlqj8\" 
(UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.812170 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/593f74ff-3d0a-4bf8-be87-e34fdda1b202-logs\") pod \"watcher-decision-engine-0\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.812353 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-jlqj8\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.812554 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-jlqj8\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.814665 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/593f74ff-3d0a-4bf8-be87-e34fdda1b202-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.816686 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-config\") pod \"neutron-7c6fb7f68-h72q7\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:38:31 crc kubenswrapper[4754]: 
I0218 19:38:31.817373 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593f74ff-3d0a-4bf8-be87-e34fdda1b202-config-data\") pod \"watcher-decision-engine-0\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.819375 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-httpd-config\") pod \"neutron-7c6fb7f68-h72q7\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.821229 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-ovndb-tls-certs\") pod \"neutron-7c6fb7f68-h72q7\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.821596 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593f74ff-3d0a-4bf8-be87-e34fdda1b202-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.821802 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-combined-ca-bundle\") pod \"neutron-7c6fb7f68-h72q7\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.831819 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbn6g\" (UniqueName: 
\"kubernetes.io/projected/593f74ff-3d0a-4bf8-be87-e34fdda1b202-kube-api-access-nbn6g\") pod \"watcher-decision-engine-0\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.834563 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7smpj\" (UniqueName: \"kubernetes.io/projected/5de1542f-0d57-4a6e-bfac-7557e6dda66e-kube-api-access-7smpj\") pod \"neutron-7c6fb7f68-h72q7\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:38:31 crc kubenswrapper[4754]: I0218 19:38:31.834849 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xss4g\" (UniqueName: \"kubernetes.io/projected/272c8937-1ebd-44f6-8514-030c1be0af24-kube-api-access-xss4g\") pod \"dnsmasq-dns-6b7b667979-jlqj8\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:32 crc kubenswrapper[4754]: I0218 19:38:32.074863 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:38:32 crc kubenswrapper[4754]: E0218 19:38:32.080297 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-79xk6" podUID="fc061809-61de-4d52-909b-e2d4957dc4a4" Feb 18 19:38:32 crc kubenswrapper[4754]: I0218 19:38:32.093723 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:38:32 crc kubenswrapper[4754]: I0218 19:38:32.117672 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:32 crc kubenswrapper[4754]: I0218 19:38:32.223086 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15826bdf-7267-4160-b0b8-f4eb3b76eae0" path="/var/lib/kubelet/pods/15826bdf-7267-4160-b0b8-f4eb3b76eae0/volumes" Feb 18 19:38:32 crc kubenswrapper[4754]: I0218 19:38:32.225644 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f46e1cd8-a675-4ee6-a25c-025c9a00c7f0" path="/var/lib/kubelet/pods/f46e1cd8-a675-4ee6-a25c-025c9a00c7f0/volumes" Feb 18 19:38:32 crc kubenswrapper[4754]: I0218 19:38:32.331303 4754 scope.go:117] "RemoveContainer" containerID="e43b31700a72c13d87c36254c4188d3a341c970e897aa44690219eb0d312ce86" Feb 18 19:38:33 crc kubenswrapper[4754]: I0218 19:38:33.142865 4754 generic.go:334] "Generic (PLEG): container finished" podID="b7e2beb9-4b67-4852-9cd8-11ac78684181" containerID="2a18e11091b54cc7b7c6e5129d620ee764b34c3e11e82ec3acb55aaf83d96e9e" exitCode=0 Feb 18 19:38:33 crc kubenswrapper[4754]: I0218 19:38:33.143630 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" event={"ID":"b7e2beb9-4b67-4852-9cd8-11ac78684181","Type":"ContainerDied","Data":"2a18e11091b54cc7b7c6e5129d620ee764b34c3e11e82ec3acb55aaf83d96e9e"} Feb 18 19:38:33 crc kubenswrapper[4754]: I0218 19:38:33.220478 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f7766589b-gh94d"] Feb 18 19:38:33 crc kubenswrapper[4754]: I0218 19:38:33.422427 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:38:33 crc kubenswrapper[4754]: I0218 19:38:33.638724 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9577ccdb8-nfcx9"] Feb 18 19:38:33 crc kubenswrapper[4754]: I0218 19:38:33.656350 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kd677"] Feb 18 19:38:33 crc 
kubenswrapper[4754]: I0218 19:38:33.720203 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 19:38:33 crc kubenswrapper[4754]: I0218 19:38:33.925497 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:38:33 crc kubenswrapper[4754]: I0218 19:38:33.946296 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:38:33 crc kubenswrapper[4754]: W0218 19:38:33.954933 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68c4cf96_8818_4c3b_b6f4_6b61f985865e.slice/crio-649c28c5ecc84d23990c96ef58bcced4190a345f37f943b30fec9e926d8f85ed WatchSource:0}: Error finding container 649c28c5ecc84d23990c96ef58bcced4190a345f37f943b30fec9e926d8f85ed: Status 404 returned error can't find the container with id 649c28c5ecc84d23990c96ef58bcced4190a345f37f943b30fec9e926d8f85ed Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.003243 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.078784 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.147720 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-jlqj8"] Feb 18 19:38:34 crc kubenswrapper[4754]: W0218 19:38:34.196804 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod272c8937_1ebd_44f6_8514_030c1be0af24.slice/crio-0695c6243300be266c9cda6cfa6e6658aabdce4587af422208e92dd9949088b6 WatchSource:0}: Error finding container 0695c6243300be266c9cda6cfa6e6658aabdce4587af422208e92dd9949088b6: Status 404 returned error can't find the container with id 
0695c6243300be266c9cda6cfa6e6658aabdce4587af422208e92dd9949088b6 Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.206342 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-86445745d9-8xbmt"] Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.210797 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.226428 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.226686 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.250861 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.330647 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-ovsdbserver-sb\") pod \"b7e2beb9-4b67-4852-9cd8-11ac78684181\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.330718 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-ovsdbserver-nb\") pod \"b7e2beb9-4b67-4852-9cd8-11ac78684181\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.330742 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-config\") pod \"b7e2beb9-4b67-4852-9cd8-11ac78684181\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " Feb 18 19:38:34 crc 
kubenswrapper[4754]: I0218 19:38:34.330889 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbg6c\" (UniqueName: \"kubernetes.io/projected/b7e2beb9-4b67-4852-9cd8-11ac78684181-kube-api-access-zbg6c\") pod \"b7e2beb9-4b67-4852-9cd8-11ac78684181\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.330939 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-dns-svc\") pod \"b7e2beb9-4b67-4852-9cd8-11ac78684181\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.331015 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-dns-swift-storage-0\") pod \"b7e2beb9-4b67-4852-9cd8-11ac78684181\" (UID: \"b7e2beb9-4b67-4852-9cd8-11ac78684181\") " Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.331326 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-config\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.331381 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-ovndb-tls-certs\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.331412 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-public-tls-certs\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.331482 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qfbw\" (UniqueName: \"kubernetes.io/projected/5d382047-c43a-4f82-8982-106e10d65430-kube-api-access-7qfbw\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.331516 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-httpd-config\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.331627 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-internal-tls-certs\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.331693 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-combined-ca-bundle\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.342633 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b7e2beb9-4b67-4852-9cd8-11ac78684181-kube-api-access-zbg6c" (OuterVolumeSpecName: "kube-api-access-zbg6c") pod "b7e2beb9-4b67-4852-9cd8-11ac78684181" (UID: "b7e2beb9-4b67-4852-9cd8-11ac78684181"). InnerVolumeSpecName "kube-api-access-zbg6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.375581 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5f7766589b-gh94d" podStartSLOduration=22.375556134 podStartE2EDuration="22.375556134s" podCreationTimestamp="2026-02-18 19:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:34.352699037 +0000 UTC m=+1216.803111833" watchObservedRunningTime="2026-02-18 19:38:34.375556134 +0000 UTC m=+1216.825968930" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.393064 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"68c4cf96-8818-4c3b-b6f4-6b61f985865e","Type":"ContainerStarted","Data":"649c28c5ecc84d23990c96ef58bcced4190a345f37f943b30fec9e926d8f85ed"} Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.393112 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86445745d9-8xbmt"] Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.393129 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c","Type":"ContainerStarted","Data":"83c74c6b5121e109ee3c15c06bf3d35dd08c6a278118d31a15c2f8b6cf34d84b"} Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.393160 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c6fb7f68-h72q7"] Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.393181 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"9dc90b97-4fa8-4e28-9c5a-422cdbe50695","Type":"ContainerStarted","Data":"5ea308c619d39af0f7416707862844a0d7359def56a38df2166698dce4300320"} Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.393197 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f7766589b-gh94d" event={"ID":"c99f043f-84fb-4825-8ba7-c918263e6c7f","Type":"ContainerStarted","Data":"7ad6d935c31c4a41096116f9831315cd554226731e874722795ee996818ccd68"} Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.393210 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f7766589b-gh94d" event={"ID":"c99f043f-84fb-4825-8ba7-c918263e6c7f","Type":"ContainerStarted","Data":"e55b7142c25d2c339ca280e77353b5829a1d05caa6d7d0694acfdbae4eb52c62"} Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.393219 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f7766589b-gh94d" event={"ID":"c99f043f-84fb-4825-8ba7-c918263e6c7f","Type":"ContainerStarted","Data":"1ec3cf5ea51a9dacf916cd383c147e5143487ca3b372f652e8dc780e770fc471"} Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.393229 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kd677" event={"ID":"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8","Type":"ContainerStarted","Data":"1179ddce2632605f488bbeb4ce69ab9d06dd312480491110b65a1365190516b2"} Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.393241 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"742e0717-1560-424d-b0d3-4e7b46f8ec8c","Type":"ContainerStarted","Data":"88b97ccf66a7afe2041000e54869e0c49270685ba81a68eb30cc7c638a205f23"} Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.408031 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-config" (OuterVolumeSpecName: "config") pod "b7e2beb9-4b67-4852-9cd8-11ac78684181" (UID: 
"b7e2beb9-4b67-4852-9cd8-11ac78684181"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.409006 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" event={"ID":"b7e2beb9-4b67-4852-9cd8-11ac78684181","Type":"ContainerDied","Data":"ddcc3034bed9db28bc0c64a9448d5f909951ba4ddf9423ec066223d31802765b"} Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.409056 4754 scope.go:117] "RemoveContainer" containerID="2a18e11091b54cc7b7c6e5129d620ee764b34c3e11e82ec3acb55aaf83d96e9e" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.409075 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-p8ttj" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.421467 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b7e2beb9-4b67-4852-9cd8-11ac78684181" (UID: "b7e2beb9-4b67-4852-9cd8-11ac78684181"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.434079 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-combined-ca-bundle\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.434180 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-config\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.434238 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-ovndb-tls-certs\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.434271 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-public-tls-certs\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.434401 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qfbw\" (UniqueName: \"kubernetes.io/projected/5d382047-c43a-4f82-8982-106e10d65430-kube-api-access-7qfbw\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc 
kubenswrapper[4754]: I0218 19:38:34.434460 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-httpd-config\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.434553 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-internal-tls-certs\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.434646 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.434957 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbg6c\" (UniqueName: \"kubernetes.io/projected/b7e2beb9-4b67-4852-9cd8-11ac78684181-kube-api-access-zbg6c\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.434985 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.443774 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69df465b89-p9cqb" event={"ID":"0c9800a6-c17a-482a-8f95-134b2df4afba","Type":"ContainerStarted","Data":"57288089d738ace640dc14fd322768426539890741742e8c906acf8333882060"} Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.443823 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69df465b89-p9cqb" 
event={"ID":"0c9800a6-c17a-482a-8f95-134b2df4afba","Type":"ContainerStarted","Data":"a0015bfcf589b05141ae04d31583003f5a20afafe3728c93e026f062a3e2c7b9"} Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.443970 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69df465b89-p9cqb" podUID="0c9800a6-c17a-482a-8f95-134b2df4afba" containerName="horizon-log" containerID="cri-o://a0015bfcf589b05141ae04d31583003f5a20afafe3728c93e026f062a3e2c7b9" gracePeriod=30 Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.444361 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69df465b89-p9cqb" podUID="0c9800a6-c17a-482a-8f95-134b2df4afba" containerName="horizon" containerID="cri-o://57288089d738ace640dc14fd322768426539890741742e8c906acf8333882060" gracePeriod=30 Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.447575 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-ovndb-tls-certs\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.452655 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-httpd-config\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.453720 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-internal-tls-certs\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc 
kubenswrapper[4754]: I0218 19:38:34.455103 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-public-tls-certs\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.459133 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b7e2beb9-4b67-4852-9cd8-11ac78684181" (UID: "b7e2beb9-4b67-4852-9cd8-11ac78684181"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.459800 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-888954555-c8j52" event={"ID":"78669beb-cdbe-41e0-8897-3bcf16dc9bdb","Type":"ContainerStarted","Data":"e443f17e3a218266fdd89edbb2dc4b3bb8154614b82f61c282613b1c175747c9"} Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.459865 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-888954555-c8j52" event={"ID":"78669beb-cdbe-41e0-8897-3bcf16dc9bdb","Type":"ContainerStarted","Data":"b599a85daa6653dda778e99a7566028a8701f531634e4f9143f9bfdf3d1124f8"} Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.460027 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-888954555-c8j52" podUID="78669beb-cdbe-41e0-8897-3bcf16dc9bdb" containerName="horizon-log" containerID="cri-o://b599a85daa6653dda778e99a7566028a8701f531634e4f9143f9bfdf3d1124f8" gracePeriod=30 Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.460132 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-888954555-c8j52" podUID="78669beb-cdbe-41e0-8897-3bcf16dc9bdb" 
containerName="horizon" containerID="cri-o://e443f17e3a218266fdd89edbb2dc4b3bb8154614b82f61c282613b1c175747c9" gracePeriod=30 Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.466999 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-config\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.473079 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-combined-ca-bundle\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.475056 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9577ccdb8-nfcx9" event={"ID":"8afcabe6-a035-4ecd-8522-93afd1691f25","Type":"ContainerStarted","Data":"23a146e392a33561291555e5b62b09042e776d4f7b8a02525e4e5b6ed6d6dd1c"} Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.497992 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-69df465b89-p9cqb" podStartSLOduration=6.220541515 podStartE2EDuration="34.497961145s" podCreationTimestamp="2026-02-18 19:38:00 +0000 UTC" firstStartedPulling="2026-02-18 19:38:01.663547441 +0000 UTC m=+1184.113960237" lastFinishedPulling="2026-02-18 19:38:29.940967071 +0000 UTC m=+1212.391379867" observedRunningTime="2026-02-18 19:38:34.465664628 +0000 UTC m=+1216.916077434" watchObservedRunningTime="2026-02-18 19:38:34.497961145 +0000 UTC m=+1216.948373941" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.499683 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qfbw\" (UniqueName: 
\"kubernetes.io/projected/5d382047-c43a-4f82-8982-106e10d65430-kube-api-access-7qfbw\") pod \"neutron-86445745d9-8xbmt\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.501821 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b7e2beb9-4b67-4852-9cd8-11ac78684181" (UID: "b7e2beb9-4b67-4852-9cd8-11ac78684181"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.509155 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-888954555-c8j52" podStartSLOduration=4.46050025 podStartE2EDuration="35.50910829s" podCreationTimestamp="2026-02-18 19:37:59 +0000 UTC" firstStartedPulling="2026-02-18 19:38:01.294068925 +0000 UTC m=+1183.744481721" lastFinishedPulling="2026-02-18 19:38:32.342676965 +0000 UTC m=+1214.793089761" observedRunningTime="2026-02-18 19:38:34.493040003 +0000 UTC m=+1216.943452809" watchObservedRunningTime="2026-02-18 19:38:34.50910829 +0000 UTC m=+1216.959521086" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.515959 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b7e2beb9-4b67-4852-9cd8-11ac78684181" (UID: "b7e2beb9-4b67-4852-9cd8-11ac78684181"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.536921 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.537280 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.537331 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b7e2beb9-4b67-4852-9cd8-11ac78684181-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.709281 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.830411 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-p8ttj"] Feb 18 19:38:34 crc kubenswrapper[4754]: I0218 19:38:34.838687 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-p8ttj"] Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.514326 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"593f74ff-3d0a-4bf8-be87-e34fdda1b202","Type":"ContainerStarted","Data":"78c2cbf475b38e1ea8d7b255685f2bf7e91f10125d22acdc912875f1df8f89bf"} Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.519499 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kd677" 
event={"ID":"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8","Type":"ContainerStarted","Data":"da88686112b9c3b27980150364cdc419b2bf6726c744b7995d433fb4e9fe0626"} Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.548288 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c6fb7f68-h72q7" event={"ID":"5de1542f-0d57-4a6e-bfac-7557e6dda66e","Type":"ContainerStarted","Data":"34b238d4c32258d2339dd3df201324c5556c452a2d58b11510e0b1c540ce9f9a"} Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.548343 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c6fb7f68-h72q7" event={"ID":"5de1542f-0d57-4a6e-bfac-7557e6dda66e","Type":"ContainerStarted","Data":"15243c08b4401352a5c40991de1875e41dd23e3ad2879feee8a08de5d3e7175a"} Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.548357 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c6fb7f68-h72q7" event={"ID":"5de1542f-0d57-4a6e-bfac-7557e6dda66e","Type":"ContainerStarted","Data":"0fdfd1421f46796e81fcaded9c1195eddebe168bc6e88aca8fa1c793a20f0d24"} Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.549552 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.559829 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-kd677" podStartSLOduration=26.559798952 podStartE2EDuration="26.559798952s" podCreationTimestamp="2026-02-18 19:38:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:35.555982184 +0000 UTC m=+1218.006394980" watchObservedRunningTime="2026-02-18 19:38:35.559798952 +0000 UTC m=+1218.010211748" Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.597685 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7c6fb7f68-h72q7" 
podStartSLOduration=4.597657233 podStartE2EDuration="4.597657233s" podCreationTimestamp="2026-02-18 19:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:35.589521 +0000 UTC m=+1218.039933796" watchObservedRunningTime="2026-02-18 19:38:35.597657233 +0000 UTC m=+1218.048070029" Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.602887 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c","Type":"ContainerStarted","Data":"1cb7ce5e8b5646ce0c43541b73f1a6f8d60e21f5d162a034c87fc8d8614cb73d"} Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.651810 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"33ee2423-3834-4e72-98f8-7c799d966afa","Type":"ContainerStarted","Data":"33f5eb8715abf8d336e1a2d97c7c6dda1e28eb23f18683e78b4b2b9992e1c35c"} Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.651861 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"33ee2423-3834-4e72-98f8-7c799d966afa","Type":"ContainerStarted","Data":"31a673fedb94d39d14fb873356fe0387209e37df8a6c39f013e7166de6652654"} Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.651877 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"33ee2423-3834-4e72-98f8-7c799d966afa","Type":"ContainerStarted","Data":"03ea4162a8d2afad00d31cff64f23ffc11d56b30e3ded30ec7418aec3bb4b44d"} Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.653997 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.667528 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="33ee2423-3834-4e72-98f8-7c799d966afa" containerName="watcher-api" 
probeResult="failure" output="Get \"http://10.217.0.163:9322/\": dial tcp 10.217.0.163:9322: connect: connection refused" Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.673710 4754 generic.go:334] "Generic (PLEG): container finished" podID="272c8937-1ebd-44f6-8514-030c1be0af24" containerID="66b4dcfe75f794c2ace7f64c76a5d413ee5a5420af6afdc0e8a9008d4642c345" exitCode=0 Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.673786 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" event={"ID":"272c8937-1ebd-44f6-8514-030c1be0af24","Type":"ContainerDied","Data":"66b4dcfe75f794c2ace7f64c76a5d413ee5a5420af6afdc0e8a9008d4642c345"} Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.673816 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" event={"ID":"272c8937-1ebd-44f6-8514-030c1be0af24","Type":"ContainerStarted","Data":"0695c6243300be266c9cda6cfa6e6658aabdce4587af422208e92dd9949088b6"} Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.697478 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=4.697458826 podStartE2EDuration="4.697458826s" podCreationTimestamp="2026-02-18 19:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:35.673532516 +0000 UTC m=+1218.123945312" watchObservedRunningTime="2026-02-18 19:38:35.697458826 +0000 UTC m=+1218.147871622" Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.713468 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9577ccdb8-nfcx9" event={"ID":"8afcabe6-a035-4ecd-8522-93afd1691f25","Type":"ContainerStarted","Data":"88b802d23292dd5c618c19202d02440a41dec604bfd64271e8807d8dc39458ab"} Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.713542 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-9577ccdb8-nfcx9" event={"ID":"8afcabe6-a035-4ecd-8522-93afd1691f25","Type":"ContainerStarted","Data":"6a052b7efef88a77b09203dde939054683d4837942b9da7c71526e02fa3db66f"} Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.728759 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86445745d9-8xbmt"] Feb 18 19:38:35 crc kubenswrapper[4754]: I0218 19:38:35.755259 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-9577ccdb8-nfcx9" podStartSLOduration=23.75523198 podStartE2EDuration="23.75523198s" podCreationTimestamp="2026-02-18 19:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:35.737023578 +0000 UTC m=+1218.187436374" watchObservedRunningTime="2026-02-18 19:38:35.75523198 +0000 UTC m=+1218.205644776" Feb 18 19:38:36 crc kubenswrapper[4754]: I0218 19:38:36.235891 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7e2beb9-4b67-4852-9cd8-11ac78684181" path="/var/lib/kubelet/pods/b7e2beb9-4b67-4852-9cd8-11ac78684181/volumes" Feb 18 19:38:36 crc kubenswrapper[4754]: I0218 19:38:36.700831 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:38:36 crc kubenswrapper[4754]: I0218 19:38:36.727063 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86445745d9-8xbmt" event={"ID":"5d382047-c43a-4f82-8982-106e10d65430","Type":"ContainerStarted","Data":"5b028b6d0b86ce82265e56b79099751123607adb39ac7173710f1e5b2560a815"} Feb 18 19:38:36 crc kubenswrapper[4754]: I0218 19:38:36.728832 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c","Type":"ContainerStarted","Data":"b96b16f9c892877fe419b90d5d9dc0e13808a1be2c5a3d378119224ae82e9711"} Feb 18 19:38:36 crc kubenswrapper[4754]: 
I0218 19:38:36.729001 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="befafff3-b6ff-4bbd-b9ba-3ebb8db4850c" containerName="glance-log" containerID="cri-o://1cb7ce5e8b5646ce0c43541b73f1a6f8d60e21f5d162a034c87fc8d8614cb73d" gracePeriod=30 Feb 18 19:38:36 crc kubenswrapper[4754]: I0218 19:38:36.729717 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="befafff3-b6ff-4bbd-b9ba-3ebb8db4850c" containerName="glance-httpd" containerID="cri-o://b96b16f9c892877fe419b90d5d9dc0e13808a1be2c5a3d378119224ae82e9711" gracePeriod=30 Feb 18 19:38:36 crc kubenswrapper[4754]: I0218 19:38:36.732742 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9dc90b97-4fa8-4e28-9c5a-422cdbe50695","Type":"ContainerStarted","Data":"fbf01c873107b8c37484cc6a9ba7cd535730d4fd7bbf48e888213133302b092e"} Feb 18 19:38:36 crc kubenswrapper[4754]: I0218 19:38:36.842173 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:38:36 crc kubenswrapper[4754]: I0218 19:38:36.842251 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:38:37 crc kubenswrapper[4754]: E0218 19:38:37.015476 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbefafff3_b6ff_4bbd_b9ba_3ebb8db4850c.slice/crio-b96b16f9c892877fe419b90d5d9dc0e13808a1be2c5a3d378119224ae82e9711.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbefafff3_b6ff_4bbd_b9ba_3ebb8db4850c.slice/crio-conmon-b96b16f9c892877fe419b90d5d9dc0e13808a1be2c5a3d378119224ae82e9711.scope\": RecentStats: unable to find data in 
memory cache]" Feb 18 19:38:37 crc kubenswrapper[4754]: I0218 19:38:37.249214 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=32.249192469 podStartE2EDuration="32.249192469s" podCreationTimestamp="2026-02-18 19:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:36.763302736 +0000 UTC m=+1219.213715542" watchObservedRunningTime="2026-02-18 19:38:37.249192469 +0000 UTC m=+1219.699605265" Feb 18 19:38:37 crc kubenswrapper[4754]: I0218 19:38:37.782060 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86445745d9-8xbmt" event={"ID":"5d382047-c43a-4f82-8982-106e10d65430","Type":"ContainerStarted","Data":"707a61dd7d4589603fc20973ee023db4d07115f81a8dcde258077d8dcad555ee"} Feb 18 19:38:37 crc kubenswrapper[4754]: I0218 19:38:37.785198 4754 generic.go:334] "Generic (PLEG): container finished" podID="befafff3-b6ff-4bbd-b9ba-3ebb8db4850c" containerID="b96b16f9c892877fe419b90d5d9dc0e13808a1be2c5a3d378119224ae82e9711" exitCode=0 Feb 18 19:38:37 crc kubenswrapper[4754]: I0218 19:38:37.785235 4754 generic.go:334] "Generic (PLEG): container finished" podID="befafff3-b6ff-4bbd-b9ba-3ebb8db4850c" containerID="1cb7ce5e8b5646ce0c43541b73f1a6f8d60e21f5d162a034c87fc8d8614cb73d" exitCode=143 Feb 18 19:38:37 crc kubenswrapper[4754]: I0218 19:38:37.785278 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c","Type":"ContainerDied","Data":"b96b16f9c892877fe419b90d5d9dc0e13808a1be2c5a3d378119224ae82e9711"} Feb 18 19:38:37 crc kubenswrapper[4754]: I0218 19:38:37.785311 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c","Type":"ContainerDied","Data":"1cb7ce5e8b5646ce0c43541b73f1a6f8d60e21f5d162a034c87fc8d8614cb73d"} Feb 18 19:38:37 crc kubenswrapper[4754]: I0218 19:38:37.786533 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" event={"ID":"272c8937-1ebd-44f6-8514-030c1be0af24","Type":"ContainerStarted","Data":"b53c6d0af1d2f8757741b56793f8d09287a144ddf30d6d4436ad6a9111373ad8"} Feb 18 19:38:37 crc kubenswrapper[4754]: I0218 19:38:37.787730 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:37 crc kubenswrapper[4754]: I0218 19:38:37.793741 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9dc90b97-4fa8-4e28-9c5a-422cdbe50695","Type":"ContainerStarted","Data":"454a4105022588049a9ae9903edbc635dd93260f7da56c439af3cf8f8d533030"} Feb 18 19:38:37 crc kubenswrapper[4754]: I0218 19:38:37.793757 4754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:38:37 crc kubenswrapper[4754]: I0218 19:38:37.794613 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9dc90b97-4fa8-4e28-9c5a-422cdbe50695" containerName="glance-log" containerID="cri-o://fbf01c873107b8c37484cc6a9ba7cd535730d4fd7bbf48e888213133302b092e" gracePeriod=30 Feb 18 19:38:37 crc kubenswrapper[4754]: I0218 19:38:37.794744 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9dc90b97-4fa8-4e28-9c5a-422cdbe50695" containerName="glance-httpd" containerID="cri-o://454a4105022588049a9ae9903edbc635dd93260f7da56c439af3cf8f8d533030" gracePeriod=30 Feb 18 19:38:37 crc kubenswrapper[4754]: I0218 19:38:37.812660 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" 
podStartSLOduration=6.812641337 podStartE2EDuration="6.812641337s" podCreationTimestamp="2026-02-18 19:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:37.810075788 +0000 UTC m=+1220.260488584" watchObservedRunningTime="2026-02-18 19:38:37.812641337 +0000 UTC m=+1220.263054133" Feb 18 19:38:37 crc kubenswrapper[4754]: I0218 19:38:37.850342 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=32.850322971 podStartE2EDuration="32.850322971s" podCreationTimestamp="2026-02-18 19:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:37.83767962 +0000 UTC m=+1220.288092426" watchObservedRunningTime="2026-02-18 19:38:37.850322971 +0000 UTC m=+1220.300735767" Feb 18 19:38:38 crc kubenswrapper[4754]: I0218 19:38:38.807226 4754 generic.go:334] "Generic (PLEG): container finished" podID="9dc90b97-4fa8-4e28-9c5a-422cdbe50695" containerID="fbf01c873107b8c37484cc6a9ba7cd535730d4fd7bbf48e888213133302b092e" exitCode=143 Feb 18 19:38:38 crc kubenswrapper[4754]: I0218 19:38:38.807991 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9dc90b97-4fa8-4e28-9c5a-422cdbe50695","Type":"ContainerDied","Data":"fbf01c873107b8c37484cc6a9ba7cd535730d4fd7bbf48e888213133302b092e"} Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.143512 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.220629 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-httpd-run\") pod \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.220698 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-scripts\") pod \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.220754 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whzw2\" (UniqueName: \"kubernetes.io/projected/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-kube-api-access-whzw2\") pod \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.220782 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-config-data\") pod \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.220827 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-logs\") pod \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.220898 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.220937 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-combined-ca-bundle\") pod \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\" (UID: \"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c\") " Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.221695 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "befafff3-b6ff-4bbd-b9ba-3ebb8db4850c" (UID: "befafff3-b6ff-4bbd-b9ba-3ebb8db4850c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.229668 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-logs" (OuterVolumeSpecName: "logs") pod "befafff3-b6ff-4bbd-b9ba-3ebb8db4850c" (UID: "befafff3-b6ff-4bbd-b9ba-3ebb8db4850c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.230590 4754 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.230642 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.234696 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-scripts" (OuterVolumeSpecName: "scripts") pod "befafff3-b6ff-4bbd-b9ba-3ebb8db4850c" (UID: "befafff3-b6ff-4bbd-b9ba-3ebb8db4850c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.245909 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "befafff3-b6ff-4bbd-b9ba-3ebb8db4850c" (UID: "befafff3-b6ff-4bbd-b9ba-3ebb8db4850c"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.249363 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-kube-api-access-whzw2" (OuterVolumeSpecName: "kube-api-access-whzw2") pod "befafff3-b6ff-4bbd-b9ba-3ebb8db4850c" (UID: "befafff3-b6ff-4bbd-b9ba-3ebb8db4850c"). InnerVolumeSpecName "kube-api-access-whzw2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.333634 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "befafff3-b6ff-4bbd-b9ba-3ebb8db4850c" (UID: "befafff3-b6ff-4bbd-b9ba-3ebb8db4850c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.334121 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.335369 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whzw2\" (UniqueName: \"kubernetes.io/projected/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-kube-api-access-whzw2\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.335399 4754 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.373426 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-config-data" (OuterVolumeSpecName: "config-data") pod "befafff3-b6ff-4bbd-b9ba-3ebb8db4850c" (UID: "befafff3-b6ff-4bbd-b9ba-3ebb8db4850c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.392097 4754 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.437642 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.437688 4754 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.437703 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.667911 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.735042 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.746293 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-combined-ca-bundle\") pod \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.746393 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-httpd-run\") pod \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.746599 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r2jp\" (UniqueName: \"kubernetes.io/projected/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-kube-api-access-5r2jp\") pod \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.746650 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.746735 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-config-data\") pod \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " Feb 18 19:38:39 crc 
kubenswrapper[4754]: I0218 19:38:39.746836 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-scripts\") pod \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.746881 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-logs\") pod \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\" (UID: \"9dc90b97-4fa8-4e28-9c5a-422cdbe50695\") " Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.748440 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-logs" (OuterVolumeSpecName: "logs") pod "9dc90b97-4fa8-4e28-9c5a-422cdbe50695" (UID: "9dc90b97-4fa8-4e28-9c5a-422cdbe50695"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.759278 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9dc90b97-4fa8-4e28-9c5a-422cdbe50695" (UID: "9dc90b97-4fa8-4e28-9c5a-422cdbe50695"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.788441 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-kube-api-access-5r2jp" (OuterVolumeSpecName: "kube-api-access-5r2jp") pod "9dc90b97-4fa8-4e28-9c5a-422cdbe50695" (UID: "9dc90b97-4fa8-4e28-9c5a-422cdbe50695"). InnerVolumeSpecName "kube-api-access-5r2jp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.795137 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-scripts" (OuterVolumeSpecName: "scripts") pod "9dc90b97-4fa8-4e28-9c5a-422cdbe50695" (UID: "9dc90b97-4fa8-4e28-9c5a-422cdbe50695"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.796152 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "9dc90b97-4fa8-4e28-9c5a-422cdbe50695" (UID: "9dc90b97-4fa8-4e28-9c5a-422cdbe50695"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.850912 4754 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.850961 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r2jp\" (UniqueName: \"kubernetes.io/projected/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-kube-api-access-5r2jp\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.850989 4754 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.851006 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.851018 4754 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.866029 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.866078 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"befafff3-b6ff-4bbd-b9ba-3ebb8db4850c","Type":"ContainerDied","Data":"83c74c6b5121e109ee3c15c06bf3d35dd08c6a278118d31a15c2f8b6cf34d84b"} Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.866214 4754 scope.go:117] "RemoveContainer" containerID="b96b16f9c892877fe419b90d5d9dc0e13808a1be2c5a3d378119224ae82e9711" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.888730 4754 generic.go:334] "Generic (PLEG): container finished" podID="9dc90b97-4fa8-4e28-9c5a-422cdbe50695" containerID="454a4105022588049a9ae9903edbc635dd93260f7da56c439af3cf8f8d533030" exitCode=0 Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.889066 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9dc90b97-4fa8-4e28-9c5a-422cdbe50695","Type":"ContainerDied","Data":"454a4105022588049a9ae9903edbc635dd93260f7da56c439af3cf8f8d533030"} Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.889106 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9dc90b97-4fa8-4e28-9c5a-422cdbe50695","Type":"ContainerDied","Data":"5ea308c619d39af0f7416707862844a0d7359def56a38df2166698dce4300320"} Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.889251 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.903024 4754 generic.go:334] "Generic (PLEG): container finished" podID="bbd89f97-ca15-432f-9ba7-6ce957c1bfa8" containerID="da88686112b9c3b27980150364cdc419b2bf6726c744b7995d433fb4e9fe0626" exitCode=0 Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.903181 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9dc90b97-4fa8-4e28-9c5a-422cdbe50695" (UID: "9dc90b97-4fa8-4e28-9c5a-422cdbe50695"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.903216 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kd677" event={"ID":"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8","Type":"ContainerDied","Data":"da88686112b9c3b27980150364cdc419b2bf6726c744b7995d433fb4e9fe0626"} Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.907181 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86445745d9-8xbmt" event={"ID":"5d382047-c43a-4f82-8982-106e10d65430","Type":"ContainerStarted","Data":"57ff2fdc7943793e6161550cdedb085aa6389fd5e52e082c88c7fabdd0d7a213"} Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.910346 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-888954555-c8j52" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.923176 4754 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.953926 4754 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.953964 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:39 crc kubenswrapper[4754]: I0218 19:38:39.983490 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-config-data" (OuterVolumeSpecName: "config-data") pod "9dc90b97-4fa8-4e28-9c5a-422cdbe50695" (UID: "9dc90b97-4fa8-4e28-9c5a-422cdbe50695"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.056460 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc90b97-4fa8-4e28-9c5a-422cdbe50695-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.073581 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.083026 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.113229 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:38:40 crc kubenswrapper[4754]: E0218 19:38:40.113745 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="befafff3-b6ff-4bbd-b9ba-3ebb8db4850c" containerName="glance-log" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.113766 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="befafff3-b6ff-4bbd-b9ba-3ebb8db4850c" containerName="glance-log" Feb 18 19:38:40 crc kubenswrapper[4754]: E0218 19:38:40.113779 4754 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc90b97-4fa8-4e28-9c5a-422cdbe50695" containerName="glance-httpd" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.113786 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc90b97-4fa8-4e28-9c5a-422cdbe50695" containerName="glance-httpd" Feb 18 19:38:40 crc kubenswrapper[4754]: E0218 19:38:40.113797 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="befafff3-b6ff-4bbd-b9ba-3ebb8db4850c" containerName="glance-httpd" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.113804 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="befafff3-b6ff-4bbd-b9ba-3ebb8db4850c" containerName="glance-httpd" Feb 18 19:38:40 crc kubenswrapper[4754]: E0218 19:38:40.113818 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7e2beb9-4b67-4852-9cd8-11ac78684181" containerName="init" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.113824 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7e2beb9-4b67-4852-9cd8-11ac78684181" containerName="init" Feb 18 19:38:40 crc kubenswrapper[4754]: E0218 19:38:40.113857 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc90b97-4fa8-4e28-9c5a-422cdbe50695" containerName="glance-log" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.113864 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc90b97-4fa8-4e28-9c5a-422cdbe50695" containerName="glance-log" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.114049 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc90b97-4fa8-4e28-9c5a-422cdbe50695" containerName="glance-httpd" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.114058 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="befafff3-b6ff-4bbd-b9ba-3ebb8db4850c" containerName="glance-httpd" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.114104 4754 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="befafff3-b6ff-4bbd-b9ba-3ebb8db4850c" containerName="glance-log" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.114114 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7e2beb9-4b67-4852-9cd8-11ac78684181" containerName="init" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.114123 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc90b97-4fa8-4e28-9c5a-422cdbe50695" containerName="glance-log" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.149199 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.149367 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.224824 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.225124 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.271356 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f30c68d8-931c-4aca-a099-c7c969a92b61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.271909 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc 
kubenswrapper[4754]: I0218 19:38:40.271941 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.274384 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.274461 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f30c68d8-931c-4aca-a099-c7c969a92b61-logs\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.275309 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz69w\" (UniqueName: \"kubernetes.io/projected/f30c68d8-931c-4aca-a099-c7c969a92b61-kube-api-access-xz69w\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.284675 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " 
pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.284717 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.285385 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="befafff3-b6ff-4bbd-b9ba-3ebb8db4850c" path="/var/lib/kubelet/pods/befafff3-b6ff-4bbd-b9ba-3ebb8db4850c/volumes" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.323229 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.347817 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.360323 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.363917 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.373488 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.374442 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.388288 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz69w\" (UniqueName: \"kubernetes.io/projected/f30c68d8-931c-4aca-a099-c7c969a92b61-kube-api-access-xz69w\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.388376 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.388398 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.388441 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f30c68d8-931c-4aca-a099-c7c969a92b61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 
crc kubenswrapper[4754]: I0218 19:38:40.388474 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.388495 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.388520 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.388553 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f30c68d8-931c-4aca-a099-c7c969a92b61-logs\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.389293 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f30c68d8-931c-4aca-a099-c7c969a92b61-logs\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.389325 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f30c68d8-931c-4aca-a099-c7c969a92b61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.390377 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.397322 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.402182 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.402448 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.402935 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.412923 
4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.437848 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz69w\" (UniqueName: \"kubernetes.io/projected/f30c68d8-931c-4aca-a099-c7c969a92b61-kube-api-access-xz69w\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.477451 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.496010 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4gpg\" (UniqueName: \"kubernetes.io/projected/e693768a-995f-412b-bede-020e44d17d03-kube-api-access-m4gpg\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.496064 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.496114 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.496199 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-scripts\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.496259 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e693768a-995f-412b-bede-020e44d17d03-logs\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.496281 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.502879 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-config-data\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.502951 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e693768a-995f-412b-bede-020e44d17d03-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.508559 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.607046 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e693768a-995f-412b-bede-020e44d17d03-logs\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.607121 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.607277 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-config-data\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.607340 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e693768a-995f-412b-bede-020e44d17d03-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " 
pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.607386 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4gpg\" (UniqueName: \"kubernetes.io/projected/e693768a-995f-412b-bede-020e44d17d03-kube-api-access-m4gpg\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.607429 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.607508 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.607576 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-scripts\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.607704 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e693768a-995f-412b-bede-020e44d17d03-logs\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc 
kubenswrapper[4754]: I0218 19:38:40.607818 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.608511 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e693768a-995f-412b-bede-020e44d17d03-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.615372 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.618226 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-scripts\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.620297 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.620739 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-config-data\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.632989 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4gpg\" (UniqueName: \"kubernetes.io/projected/e693768a-995f-412b-bede-020e44d17d03-kube-api-access-m4gpg\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.657701 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.792645 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.853728 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.923692 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"68c4cf96-8818-4c3b-b6f4-6b61f985865e","Type":"ContainerStarted","Data":"c7a5d1e0704fc463a3ff4544abb0489f36699f077cca49b9e5561778b3b90bc5"} Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.928805 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-b2mr5" event={"ID":"9a109a6c-ffaa-479e-95e6-ef033aec4b27","Type":"ContainerStarted","Data":"0f823c1d6c10e70ca2f31930414e3d92eb69e294e26d5e6a99fe870f79e3e84f"} Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.935744 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5d7g9" event={"ID":"5747d187-87f8-4baa-b0aa-65916db69601","Type":"ContainerStarted","Data":"8fd9dd1c0437f50a9084fe38065e878cde188e0c9b5fb708c30f0cdc4df56daa"} Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.947598 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"593f74ff-3d0a-4bf8-be87-e34fdda1b202","Type":"ContainerStarted","Data":"9282ada21d27b554bdba907f8ac14600a68966fa6795e7e205c78deb368048a6"} Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.948086 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.957982 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=4.785779115 podStartE2EDuration="9.957957206s" podCreationTimestamp="2026-02-18 19:38:31 +0000 UTC" firstStartedPulling="2026-02-18 19:38:33.9789579 +0000 UTC m=+1216.429370696" lastFinishedPulling="2026-02-18 19:38:39.151135991 +0000 UTC m=+1221.601548787" observedRunningTime="2026-02-18 19:38:40.944068527 +0000 UTC 
m=+1223.394481323" watchObservedRunningTime="2026-02-18 19:38:40.957957206 +0000 UTC m=+1223.408370002" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.977118 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-b2mr5" podStartSLOduration=4.162171893 podStartE2EDuration="41.977093937s" podCreationTimestamp="2026-02-18 19:37:59 +0000 UTC" firstStartedPulling="2026-02-18 19:38:01.338622982 +0000 UTC m=+1183.789035768" lastFinishedPulling="2026-02-18 19:38:39.153545026 +0000 UTC m=+1221.603957812" observedRunningTime="2026-02-18 19:38:40.963659482 +0000 UTC m=+1223.414072278" watchObservedRunningTime="2026-02-18 19:38:40.977093937 +0000 UTC m=+1223.427506733" Feb 18 19:38:40 crc kubenswrapper[4754]: I0218 19:38:40.991598 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-5d7g9" podStartSLOduration=4.292431687 podStartE2EDuration="41.991569254s" podCreationTimestamp="2026-02-18 19:37:59 +0000 UTC" firstStartedPulling="2026-02-18 19:38:01.461523609 +0000 UTC m=+1183.911936405" lastFinishedPulling="2026-02-18 19:38:39.160661176 +0000 UTC m=+1221.611073972" observedRunningTime="2026-02-18 19:38:40.978537501 +0000 UTC m=+1223.428950297" watchObservedRunningTime="2026-02-18 19:38:40.991569254 +0000 UTC m=+1223.441982050" Feb 18 19:38:41 crc kubenswrapper[4754]: I0218 19:38:41.003721 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=5.187829227 podStartE2EDuration="10.003697499s" podCreationTimestamp="2026-02-18 19:38:31 +0000 UTC" firstStartedPulling="2026-02-18 19:38:34.368135925 +0000 UTC m=+1216.818548711" lastFinishedPulling="2026-02-18 19:38:39.184004197 +0000 UTC m=+1221.634416983" observedRunningTime="2026-02-18 19:38:41.000554302 +0000 UTC m=+1223.450967098" watchObservedRunningTime="2026-02-18 19:38:41.003697499 +0000 UTC m=+1223.454110295" Feb 18 19:38:41 crc 
kubenswrapper[4754]: I0218 19:38:41.033565 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-86445745d9-8xbmt" podStartSLOduration=7.033541281 podStartE2EDuration="7.033541281s" podCreationTimestamp="2026-02-18 19:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:41.032737006 +0000 UTC m=+1223.483149802" watchObservedRunningTime="2026-02-18 19:38:41.033541281 +0000 UTC m=+1223.483954077" Feb 18 19:38:41 crc kubenswrapper[4754]: I0218 19:38:41.700417 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 18 19:38:41 crc kubenswrapper[4754]: I0218 19:38:41.709249 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 18 19:38:41 crc kubenswrapper[4754]: I0218 19:38:41.748249 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 18 19:38:41 crc kubenswrapper[4754]: I0218 19:38:41.748324 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 18 19:38:41 crc kubenswrapper[4754]: I0218 19:38:41.802956 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 18 19:38:41 crc kubenswrapper[4754]: I0218 19:38:41.963107 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 19:38:42 crc kubenswrapper[4754]: I0218 19:38:42.020012 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 18 19:38:42 crc kubenswrapper[4754]: I0218 19:38:42.075361 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:38:42 crc kubenswrapper[4754]: I0218 19:38:42.134515 4754 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:38:42 crc kubenswrapper[4754]: I0218 19:38:42.138464 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 18 19:38:42 crc kubenswrapper[4754]: I0218 19:38:42.262896 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dc90b97-4fa8-4e28-9c5a-422cdbe50695" path="/var/lib/kubelet/pods/9dc90b97-4fa8-4e28-9c5a-422cdbe50695/volumes" Feb 18 19:38:42 crc kubenswrapper[4754]: I0218 19:38:42.328396 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-fk8xw"] Feb 18 19:38:42 crc kubenswrapper[4754]: I0218 19:38:42.329130 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" podUID="8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c" containerName="dnsmasq-dns" containerID="cri-o://71ffccd940dea48b446679dc068d8116f6fcd39ccb2efee571b88e7d3f37c452" gracePeriod=10 Feb 18 19:38:42 crc kubenswrapper[4754]: I0218 19:38:42.992940 4754 generic.go:334] "Generic (PLEG): container finished" podID="8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c" containerID="71ffccd940dea48b446679dc068d8116f6fcd39ccb2efee571b88e7d3f37c452" exitCode=0 Feb 18 19:38:42 crc kubenswrapper[4754]: I0218 19:38:42.993079 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" event={"ID":"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c","Type":"ContainerDied","Data":"71ffccd940dea48b446679dc068d8116f6fcd39ccb2efee571b88e7d3f37c452"} Feb 18 19:38:42 crc kubenswrapper[4754]: I0218 19:38:42.993927 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 19:38:43 crc kubenswrapper[4754]: I0218 19:38:43.015775 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:43 crc 
kubenswrapper[4754]: I0218 19:38:43.015834 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:38:43 crc kubenswrapper[4754]: I0218 19:38:43.053880 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 18 19:38:43 crc kubenswrapper[4754]: I0218 19:38:43.149059 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:43 crc kubenswrapper[4754]: I0218 19:38:43.149255 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:38:43 crc kubenswrapper[4754]: I0218 19:38:43.150944 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5f7766589b-gh94d" podUID="c99f043f-84fb-4825-8ba7-c918263e6c7f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Feb 18 19:38:43 crc kubenswrapper[4754]: I0218 19:38:43.395660 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" podUID="8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: connect: connection refused" Feb 18 19:38:45 crc kubenswrapper[4754]: I0218 19:38:45.787997 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:38:45 crc kubenswrapper[4754]: I0218 19:38:45.790833 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="33ee2423-3834-4e72-98f8-7c799d966afa" containerName="watcher-api-log" containerID="cri-o://31a673fedb94d39d14fb873356fe0387209e37df8a6c39f013e7166de6652654" gracePeriod=30 Feb 18 19:38:45 crc kubenswrapper[4754]: I0218 19:38:45.792102 4754 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="33ee2423-3834-4e72-98f8-7c799d966afa" containerName="watcher-api" containerID="cri-o://33f5eb8715abf8d336e1a2d97c7c6dda1e28eb23f18683e78b4b2b9992e1c35c" gracePeriod=30 Feb 18 19:38:46 crc kubenswrapper[4754]: I0218 19:38:46.052454 4754 generic.go:334] "Generic (PLEG): container finished" podID="33ee2423-3834-4e72-98f8-7c799d966afa" containerID="31a673fedb94d39d14fb873356fe0387209e37df8a6c39f013e7166de6652654" exitCode=143 Feb 18 19:38:46 crc kubenswrapper[4754]: I0218 19:38:46.052670 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"33ee2423-3834-4e72-98f8-7c799d966afa","Type":"ContainerDied","Data":"31a673fedb94d39d14fb873356fe0387209e37df8a6c39f013e7166de6652654"} Feb 18 19:38:47 crc kubenswrapper[4754]: I0218 19:38:47.075331 4754 generic.go:334] "Generic (PLEG): container finished" podID="5747d187-87f8-4baa-b0aa-65916db69601" containerID="8fd9dd1c0437f50a9084fe38065e878cde188e0c9b5fb708c30f0cdc4df56daa" exitCode=0 Feb 18 19:38:47 crc kubenswrapper[4754]: I0218 19:38:47.075430 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5d7g9" event={"ID":"5747d187-87f8-4baa-b0aa-65916db69601","Type":"ContainerDied","Data":"8fd9dd1c0437f50a9084fe38065e878cde188e0c9b5fb708c30f0cdc4df56daa"} Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.041649 4754 scope.go:117] "RemoveContainer" containerID="1cb7ce5e8b5646ce0c43541b73f1a6f8d60e21f5d162a034c87fc8d8614cb73d" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.066810 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.096170 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" event={"ID":"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c","Type":"ContainerDied","Data":"19831b62002c20b0d2403135168700bec32db444030dfab2bc7d0f4138280b79"} Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.096218 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19831b62002c20b0d2403135168700bec32db444030dfab2bc7d0f4138280b79" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.101847 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kd677" event={"ID":"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8","Type":"ContainerDied","Data":"1179ddce2632605f488bbeb4ce69ab9d06dd312480491110b65a1365190516b2"} Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.101865 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1179ddce2632605f488bbeb4ce69ab9d06dd312480491110b65a1365190516b2" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.101925 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kd677" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.106731 4754 generic.go:334] "Generic (PLEG): container finished" podID="9a109a6c-ffaa-479e-95e6-ef033aec4b27" containerID="0f823c1d6c10e70ca2f31930414e3d92eb69e294e26d5e6a99fe870f79e3e84f" exitCode=0 Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.106824 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-b2mr5" event={"ID":"9a109a6c-ffaa-479e-95e6-ef033aec4b27","Type":"ContainerDied","Data":"0f823c1d6c10e70ca2f31930414e3d92eb69e294e26d5e6a99fe870f79e3e84f"} Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.184441 4754 scope.go:117] "RemoveContainer" containerID="454a4105022588049a9ae9903edbc635dd93260f7da56c439af3cf8f8d533030" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.261321 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-config-data\") pod \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.261554 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-scripts\") pod \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.261592 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-fernet-keys\") pod \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.261615 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-credential-keys\") pod \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.261677 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8f9k\" (UniqueName: \"kubernetes.io/projected/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-kube-api-access-j8f9k\") pod \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.261728 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-combined-ca-bundle\") pod \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\" (UID: \"bbd89f97-ca15-432f-9ba7-6ce957c1bfa8\") " Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.264629 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.276594 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-kube-api-access-j8f9k" (OuterVolumeSpecName: "kube-api-access-j8f9k") pod "bbd89f97-ca15-432f-9ba7-6ce957c1bfa8" (UID: "bbd89f97-ca15-432f-9ba7-6ce957c1bfa8"). InnerVolumeSpecName "kube-api-access-j8f9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.279655 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bbd89f97-ca15-432f-9ba7-6ce957c1bfa8" (UID: "bbd89f97-ca15-432f-9ba7-6ce957c1bfa8"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.282268 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-scripts" (OuterVolumeSpecName: "scripts") pod "bbd89f97-ca15-432f-9ba7-6ce957c1bfa8" (UID: "bbd89f97-ca15-432f-9ba7-6ce957c1bfa8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.296557 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bbd89f97-ca15-432f-9ba7-6ce957c1bfa8" (UID: "bbd89f97-ca15-432f-9ba7-6ce957c1bfa8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.317267 4754 scope.go:117] "RemoveContainer" containerID="fbf01c873107b8c37484cc6a9ba7cd535730d4fd7bbf48e888213133302b092e" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.329061 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbd89f97-ca15-432f-9ba7-6ce957c1bfa8" (UID: "bbd89f97-ca15-432f-9ba7-6ce957c1bfa8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.337457 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-config-data" (OuterVolumeSpecName: "config-data") pod "bbd89f97-ca15-432f-9ba7-6ce957c1bfa8" (UID: "bbd89f97-ca15-432f-9ba7-6ce957c1bfa8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.361308 4754 scope.go:117] "RemoveContainer" containerID="454a4105022588049a9ae9903edbc635dd93260f7da56c439af3cf8f8d533030" Feb 18 19:38:48 crc kubenswrapper[4754]: E0218 19:38:48.361950 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"454a4105022588049a9ae9903edbc635dd93260f7da56c439af3cf8f8d533030\": container with ID starting with 454a4105022588049a9ae9903edbc635dd93260f7da56c439af3cf8f8d533030 not found: ID does not exist" containerID="454a4105022588049a9ae9903edbc635dd93260f7da56c439af3cf8f8d533030" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.362001 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"454a4105022588049a9ae9903edbc635dd93260f7da56c439af3cf8f8d533030"} err="failed to get container status \"454a4105022588049a9ae9903edbc635dd93260f7da56c439af3cf8f8d533030\": rpc error: code = NotFound desc = could not find container \"454a4105022588049a9ae9903edbc635dd93260f7da56c439af3cf8f8d533030\": container with ID starting with 454a4105022588049a9ae9903edbc635dd93260f7da56c439af3cf8f8d533030 not found: ID does not exist" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.362036 4754 scope.go:117] "RemoveContainer" containerID="fbf01c873107b8c37484cc6a9ba7cd535730d4fd7bbf48e888213133302b092e" Feb 18 19:38:48 crc kubenswrapper[4754]: E0218 19:38:48.362546 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbf01c873107b8c37484cc6a9ba7cd535730d4fd7bbf48e888213133302b092e\": container with ID starting with fbf01c873107b8c37484cc6a9ba7cd535730d4fd7bbf48e888213133302b092e not found: ID does not exist" containerID="fbf01c873107b8c37484cc6a9ba7cd535730d4fd7bbf48e888213133302b092e" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.362591 
4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbf01c873107b8c37484cc6a9ba7cd535730d4fd7bbf48e888213133302b092e"} err="failed to get container status \"fbf01c873107b8c37484cc6a9ba7cd535730d4fd7bbf48e888213133302b092e\": rpc error: code = NotFound desc = could not find container \"fbf01c873107b8c37484cc6a9ba7cd535730d4fd7bbf48e888213133302b092e\": container with ID starting with fbf01c873107b8c37484cc6a9ba7cd535730d4fd7bbf48e888213133302b092e not found: ID does not exist" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.364436 4754 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.364459 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8f9k\" (UniqueName: \"kubernetes.io/projected/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-kube-api-access-j8f9k\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.364471 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.364481 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.364491 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.364502 4754 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.469073 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-ovsdbserver-nb\") pod \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.469242 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh7fs\" (UniqueName: \"kubernetes.io/projected/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-kube-api-access-hh7fs\") pod \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.471862 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-ovsdbserver-sb\") pod \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.471914 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-dns-swift-storage-0\") pod \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.471949 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-config\") pod \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.472031 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-dns-svc\") pod \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\" (UID: \"8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c\") " Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.484560 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-kube-api-access-hh7fs" (OuterVolumeSpecName: "kube-api-access-hh7fs") pod "8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c" (UID: "8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c"). InnerVolumeSpecName "kube-api-access-hh7fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.552677 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c" (UID: "8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.562581 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c" (UID: "8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.570450 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-config" (OuterVolumeSpecName: "config") pod "8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c" (UID: "8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.577890 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh7fs\" (UniqueName: \"kubernetes.io/projected/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-kube-api-access-hh7fs\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.577933 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.577945 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.577954 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.582487 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c" (UID: "8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.587625 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c" (UID: "8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.634289 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:38:48 crc kubenswrapper[4754]: W0218 19:38:48.649687 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf30c68d8_931c_4aca_a099_c7c969a92b61.slice/crio-a620fefac7b3f7a94ab0074525054a8a7094c2df3b973120511557bcbc62fab5 WatchSource:0}: Error finding container a620fefac7b3f7a94ab0074525054a8a7094c2df3b973120511557bcbc62fab5: Status 404 returned error can't find the container with id a620fefac7b3f7a94ab0074525054a8a7094c2df3b973120511557bcbc62fab5 Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.676937 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5d7g9" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.680463 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.680505 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.784801 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5747d187-87f8-4baa-b0aa-65916db69601-scripts\") pod \"5747d187-87f8-4baa-b0aa-65916db69601\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.785004 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/5747d187-87f8-4baa-b0aa-65916db69601-combined-ca-bundle\") pod \"5747d187-87f8-4baa-b0aa-65916db69601\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.785046 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kb7v\" (UniqueName: \"kubernetes.io/projected/5747d187-87f8-4baa-b0aa-65916db69601-kube-api-access-8kb7v\") pod \"5747d187-87f8-4baa-b0aa-65916db69601\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.785078 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5747d187-87f8-4baa-b0aa-65916db69601-logs\") pod \"5747d187-87f8-4baa-b0aa-65916db69601\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.785176 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5747d187-87f8-4baa-b0aa-65916db69601-config-data\") pod \"5747d187-87f8-4baa-b0aa-65916db69601\" (UID: \"5747d187-87f8-4baa-b0aa-65916db69601\") " Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.789647 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5747d187-87f8-4baa-b0aa-65916db69601-logs" (OuterVolumeSpecName: "logs") pod "5747d187-87f8-4baa-b0aa-65916db69601" (UID: "5747d187-87f8-4baa-b0aa-65916db69601"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.792532 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5747d187-87f8-4baa-b0aa-65916db69601-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.796303 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5747d187-87f8-4baa-b0aa-65916db69601-kube-api-access-8kb7v" (OuterVolumeSpecName: "kube-api-access-8kb7v") pod "5747d187-87f8-4baa-b0aa-65916db69601" (UID: "5747d187-87f8-4baa-b0aa-65916db69601"). InnerVolumeSpecName "kube-api-access-8kb7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.804565 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5747d187-87f8-4baa-b0aa-65916db69601-scripts" (OuterVolumeSpecName: "scripts") pod "5747d187-87f8-4baa-b0aa-65916db69601" (UID: "5747d187-87f8-4baa-b0aa-65916db69601"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.820594 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.838496 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5747d187-87f8-4baa-b0aa-65916db69601-config-data" (OuterVolumeSpecName: "config-data") pod "5747d187-87f8-4baa-b0aa-65916db69601" (UID: "5747d187-87f8-4baa-b0aa-65916db69601"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.855252 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5747d187-87f8-4baa-b0aa-65916db69601-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5747d187-87f8-4baa-b0aa-65916db69601" (UID: "5747d187-87f8-4baa-b0aa-65916db69601"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.895874 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5747d187-87f8-4baa-b0aa-65916db69601-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.895913 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kb7v\" (UniqueName: \"kubernetes.io/projected/5747d187-87f8-4baa-b0aa-65916db69601-kube-api-access-8kb7v\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.895932 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5747d187-87f8-4baa-b0aa-65916db69601-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.895947 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5747d187-87f8-4baa-b0aa-65916db69601-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.987434 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="33ee2423-3834-4e72-98f8-7c799d966afa" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.163:9322/\": read tcp 10.217.0.2:36382->10.217.0.163:9322: read: connection reset by peer" Feb 18 19:38:48 crc kubenswrapper[4754]: I0218 19:38:48.987831 
4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="33ee2423-3834-4e72-98f8-7c799d966afa" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9322/\": read tcp 10.217.0.2:36384->10.217.0.163:9322: read: connection reset by peer" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.161427 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"742e0717-1560-424d-b0d3-4e7b46f8ec8c","Type":"ContainerStarted","Data":"11d141f0bac541db92edc6c262a59d160bc1dba4ddd0b667fbe30a91d42f0df2"} Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.165046 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e693768a-995f-412b-bede-020e44d17d03","Type":"ContainerStarted","Data":"aa2998bcd8bc7c139b359dc741f6cd89a1e0306a553304d4b348b7c2db6e6122"} Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.166454 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f30c68d8-931c-4aca-a099-c7c969a92b61","Type":"ContainerStarted","Data":"a620fefac7b3f7a94ab0074525054a8a7094c2df3b973120511557bcbc62fab5"} Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.182170 4754 generic.go:334] "Generic (PLEG): container finished" podID="33ee2423-3834-4e72-98f8-7c799d966afa" containerID="33f5eb8715abf8d336e1a2d97c7c6dda1e28eb23f18683e78b4b2b9992e1c35c" exitCode=0 Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.182303 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"33ee2423-3834-4e72-98f8-7c799d966afa","Type":"ContainerDied","Data":"33f5eb8715abf8d336e1a2d97c7c6dda1e28eb23f18683e78b4b2b9992e1c35c"} Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.249701 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-644bd8cdcb-r8qtx"] Feb 18 19:38:49 crc kubenswrapper[4754]: E0218 
19:38:49.250189 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c" containerName="init" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.250206 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c" containerName="init" Feb 18 19:38:49 crc kubenswrapper[4754]: E0218 19:38:49.250224 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd89f97-ca15-432f-9ba7-6ce957c1bfa8" containerName="keystone-bootstrap" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.250230 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd89f97-ca15-432f-9ba7-6ce957c1bfa8" containerName="keystone-bootstrap" Feb 18 19:38:49 crc kubenswrapper[4754]: E0218 19:38:49.250256 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c" containerName="dnsmasq-dns" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.250262 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c" containerName="dnsmasq-dns" Feb 18 19:38:49 crc kubenswrapper[4754]: E0218 19:38:49.250272 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5747d187-87f8-4baa-b0aa-65916db69601" containerName="placement-db-sync" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.250279 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="5747d187-87f8-4baa-b0aa-65916db69601" containerName="placement-db-sync" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.250490 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="5747d187-87f8-4baa-b0aa-65916db69601" containerName="placement-db-sync" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.250509 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbd89f97-ca15-432f-9ba7-6ce957c1bfa8" containerName="keystone-bootstrap" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.250533 4754 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c" containerName="dnsmasq-dns" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.261431 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5d7g9" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.261824 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-fk8xw" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.264095 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5d7g9" event={"ID":"5747d187-87f8-4baa-b0aa-65916db69601","Type":"ContainerDied","Data":"b338c1557451514520d2e9dd6b06236abf1d32e474d2642c371069b6ec11cb00"} Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.264158 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b338c1557451514520d2e9dd6b06236abf1d32e474d2642c371069b6ec11cb00" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.264175 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-644bd8cdcb-r8qtx"] Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.264262 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.269411 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.269728 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.269883 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gvkx6" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.277037 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.277219 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.277355 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.394285 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8669779bb4-bghj5"] Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.396453 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.398756 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.406035 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-k8m6f" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.406613 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.406902 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.407169 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.414632 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmvl2\" (UniqueName: \"kubernetes.io/projected/522d7b0a-243a-4469-b9f1-d6d838827080-kube-api-access-cmvl2\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.414682 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-internal-tls-certs\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.414728 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-public-tls-certs\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.414759 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph777\" (UniqueName: \"kubernetes.io/projected/365128a8-105e-4041-a7d8-9e1948b41aef-kube-api-access-ph777\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.414779 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-combined-ca-bundle\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.414814 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-combined-ca-bundle\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.414845 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-public-tls-certs\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.414883 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-fernet-keys\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.414899 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-internal-tls-certs\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.414918 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-config-data\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.414950 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/522d7b0a-243a-4469-b9f1-d6d838827080-logs\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.414974 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-config-data\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.414999 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-credential-keys\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.415018 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-scripts\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.415048 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-scripts\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.428560 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8669779bb4-bghj5"] Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.449298 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-fk8xw"] Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.521017 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-combined-ca-bundle\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.521518 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-public-tls-certs\") pod \"placement-8669779bb4-bghj5\" 
(UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.521587 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-internal-tls-certs\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.521605 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-fernet-keys\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.521622 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-config-data\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.521643 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/522d7b0a-243a-4469-b9f1-d6d838827080-logs\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.521660 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-config-data\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc 
kubenswrapper[4754]: I0218 19:38:49.521691 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-credential-keys\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.521711 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-scripts\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.521735 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-scripts\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.521768 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmvl2\" (UniqueName: \"kubernetes.io/projected/522d7b0a-243a-4469-b9f1-d6d838827080-kube-api-access-cmvl2\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.521792 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-internal-tls-certs\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.521837 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-public-tls-certs\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.524119 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-fk8xw"] Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.525244 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph777\" (UniqueName: \"kubernetes.io/projected/365128a8-105e-4041-a7d8-9e1948b41aef-kube-api-access-ph777\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.525278 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-combined-ca-bundle\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.526817 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/522d7b0a-243a-4469-b9f1-d6d838827080-logs\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.547187 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-combined-ca-bundle\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc 
kubenswrapper[4754]: I0218 19:38:49.547923 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-internal-tls-certs\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.548550 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-credential-keys\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.550941 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-config-data\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.551066 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-config-data\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.551280 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-combined-ca-bundle\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.552859 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-internal-tls-certs\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.553798 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.555858 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-fernet-keys\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.555966 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-public-tls-certs\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.556593 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-scripts\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.557902 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-public-tls-certs\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.559607 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/365128a8-105e-4041-a7d8-9e1948b41aef-scripts\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.560924 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmvl2\" (UniqueName: \"kubernetes.io/projected/522d7b0a-243a-4469-b9f1-d6d838827080-kube-api-access-cmvl2\") pod \"placement-8669779bb4-bghj5\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.588954 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph777\" (UniqueName: \"kubernetes.io/projected/365128a8-105e-4041-a7d8-9e1948b41aef-kube-api-access-ph777\") pod \"keystone-644bd8cdcb-r8qtx\" (UID: \"365128a8-105e-4041-a7d8-9e1948b41aef\") " pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.633660 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ee2423-3834-4e72-98f8-7c799d966afa-combined-ca-bundle\") pod \"33ee2423-3834-4e72-98f8-7c799d966afa\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.633785 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ee2423-3834-4e72-98f8-7c799d966afa-config-data\") pod \"33ee2423-3834-4e72-98f8-7c799d966afa\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.633813 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33ee2423-3834-4e72-98f8-7c799d966afa-logs\") pod 
\"33ee2423-3834-4e72-98f8-7c799d966afa\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.633861 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8wpz\" (UniqueName: \"kubernetes.io/projected/33ee2423-3834-4e72-98f8-7c799d966afa-kube-api-access-v8wpz\") pod \"33ee2423-3834-4e72-98f8-7c799d966afa\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.633898 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/33ee2423-3834-4e72-98f8-7c799d966afa-custom-prometheus-ca\") pod \"33ee2423-3834-4e72-98f8-7c799d966afa\" (UID: \"33ee2423-3834-4e72-98f8-7c799d966afa\") " Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.642701 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33ee2423-3834-4e72-98f8-7c799d966afa-logs" (OuterVolumeSpecName: "logs") pod "33ee2423-3834-4e72-98f8-7c799d966afa" (UID: "33ee2423-3834-4e72-98f8-7c799d966afa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.671209 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-cfd5fbcfb-z278z"] Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.671650 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:49 crc kubenswrapper[4754]: E0218 19:38:49.671736 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33ee2423-3834-4e72-98f8-7c799d966afa" containerName="watcher-api" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.671753 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="33ee2423-3834-4e72-98f8-7c799d966afa" containerName="watcher-api" Feb 18 19:38:49 crc kubenswrapper[4754]: E0218 19:38:49.671793 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33ee2423-3834-4e72-98f8-7c799d966afa" containerName="watcher-api-log" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.671801 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="33ee2423-3834-4e72-98f8-7c799d966afa" containerName="watcher-api-log" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.671975 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="33ee2423-3834-4e72-98f8-7c799d966afa" containerName="watcher-api" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.671987 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="33ee2423-3834-4e72-98f8-7c799d966afa" containerName="watcher-api-log" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.677056 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.688073 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33ee2423-3834-4e72-98f8-7c799d966afa-kube-api-access-v8wpz" (OuterVolumeSpecName: "kube-api-access-v8wpz") pod "33ee2423-3834-4e72-98f8-7c799d966afa" (UID: "33ee2423-3834-4e72-98f8-7c799d966afa"). InnerVolumeSpecName "kube-api-access-v8wpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.692341 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cfd5fbcfb-z278z"] Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.736035 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33ee2423-3834-4e72-98f8-7c799d966afa-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.736059 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8wpz\" (UniqueName: \"kubernetes.io/projected/33ee2423-3834-4e72-98f8-7c799d966afa-kube-api-access-v8wpz\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.775819 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ee2423-3834-4e72-98f8-7c799d966afa-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "33ee2423-3834-4e72-98f8-7c799d966afa" (UID: "33ee2423-3834-4e72-98f8-7c799d966afa"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.792319 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ee2423-3834-4e72-98f8-7c799d966afa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33ee2423-3834-4e72-98f8-7c799d966afa" (UID: "33ee2423-3834-4e72-98f8-7c799d966afa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.794321 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ee2423-3834-4e72-98f8-7c799d966afa-config-data" (OuterVolumeSpecName: "config-data") pod "33ee2423-3834-4e72-98f8-7c799d966afa" (UID: "33ee2423-3834-4e72-98f8-7c799d966afa"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.826685 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.838576 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a6fa563-05b7-4d18-839c-19055119022e-public-tls-certs\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.838638 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a6fa563-05b7-4d18-839c-19055119022e-combined-ca-bundle\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.838675 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq26l\" (UniqueName: \"kubernetes.io/projected/1a6fa563-05b7-4d18-839c-19055119022e-kube-api-access-hq26l\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.838716 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a6fa563-05b7-4d18-839c-19055119022e-config-data\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.838786 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a6fa563-05b7-4d18-839c-19055119022e-logs\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.838805 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a6fa563-05b7-4d18-839c-19055119022e-scripts\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.838842 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a6fa563-05b7-4d18-839c-19055119022e-internal-tls-certs\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.838909 4754 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/33ee2423-3834-4e72-98f8-7c799d966afa-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.838930 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ee2423-3834-4e72-98f8-7c799d966afa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.838944 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ee2423-3834-4e72-98f8-7c799d966afa-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.924531 4754 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/barbican-db-sync-b2mr5" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.940264 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a6fa563-05b7-4d18-839c-19055119022e-logs\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.940670 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a6fa563-05b7-4d18-839c-19055119022e-scripts\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.940718 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a6fa563-05b7-4d18-839c-19055119022e-internal-tls-certs\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.940786 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a6fa563-05b7-4d18-839c-19055119022e-public-tls-certs\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.940819 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a6fa563-05b7-4d18-839c-19055119022e-combined-ca-bundle\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc 
kubenswrapper[4754]: I0218 19:38:49.940843 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq26l\" (UniqueName: \"kubernetes.io/projected/1a6fa563-05b7-4d18-839c-19055119022e-kube-api-access-hq26l\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.940882 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a6fa563-05b7-4d18-839c-19055119022e-config-data\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.943486 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a6fa563-05b7-4d18-839c-19055119022e-logs\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.947999 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a6fa563-05b7-4d18-839c-19055119022e-internal-tls-certs\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.948229 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a6fa563-05b7-4d18-839c-19055119022e-config-data\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.953292 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/1a6fa563-05b7-4d18-839c-19055119022e-scripts\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.953807 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a6fa563-05b7-4d18-839c-19055119022e-combined-ca-bundle\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.976015 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a6fa563-05b7-4d18-839c-19055119022e-public-tls-certs\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:49 crc kubenswrapper[4754]: I0218 19:38:49.998946 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq26l\" (UniqueName: \"kubernetes.io/projected/1a6fa563-05b7-4d18-839c-19055119022e-kube-api-access-hq26l\") pod \"placement-cfd5fbcfb-z278z\" (UID: \"1a6fa563-05b7-4d18-839c-19055119022e\") " pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.046533 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt2b5\" (UniqueName: \"kubernetes.io/projected/9a109a6c-ffaa-479e-95e6-ef033aec4b27-kube-api-access-lt2b5\") pod \"9a109a6c-ffaa-479e-95e6-ef033aec4b27\" (UID: \"9a109a6c-ffaa-479e-95e6-ef033aec4b27\") " Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.046685 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a109a6c-ffaa-479e-95e6-ef033aec4b27-db-sync-config-data\") pod 
\"9a109a6c-ffaa-479e-95e6-ef033aec4b27\" (UID: \"9a109a6c-ffaa-479e-95e6-ef033aec4b27\") " Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.046805 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a109a6c-ffaa-479e-95e6-ef033aec4b27-combined-ca-bundle\") pod \"9a109a6c-ffaa-479e-95e6-ef033aec4b27\" (UID: \"9a109a6c-ffaa-479e-95e6-ef033aec4b27\") " Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.053820 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.068498 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a109a6c-ffaa-479e-95e6-ef033aec4b27-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9a109a6c-ffaa-479e-95e6-ef033aec4b27" (UID: "9a109a6c-ffaa-479e-95e6-ef033aec4b27"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.093877 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a109a6c-ffaa-479e-95e6-ef033aec4b27-kube-api-access-lt2b5" (OuterVolumeSpecName: "kube-api-access-lt2b5") pod "9a109a6c-ffaa-479e-95e6-ef033aec4b27" (UID: "9a109a6c-ffaa-479e-95e6-ef033aec4b27"). InnerVolumeSpecName "kube-api-access-lt2b5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.148848 4754 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a109a6c-ffaa-479e-95e6-ef033aec4b27-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.148886 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt2b5\" (UniqueName: \"kubernetes.io/projected/9a109a6c-ffaa-479e-95e6-ef033aec4b27-kube-api-access-lt2b5\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.155927 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a109a6c-ffaa-479e-95e6-ef033aec4b27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a109a6c-ffaa-479e-95e6-ef033aec4b27" (UID: "9a109a6c-ffaa-479e-95e6-ef033aec4b27"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.270207 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c" path="/var/lib/kubelet/pods/8b107ba5-e1b6-4bb5-aaa9-48f20858ff1c/volumes" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.280187 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a109a6c-ffaa-479e-95e6-ef033aec4b27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.421804 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"33ee2423-3834-4e72-98f8-7c799d966afa","Type":"ContainerDied","Data":"03ea4162a8d2afad00d31cff64f23ffc11d56b30e3ded30ec7418aec3bb4b44d"} Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.421888 4754 scope.go:117] "RemoveContainer" containerID="33f5eb8715abf8d336e1a2d97c7c6dda1e28eb23f18683e78b4b2b9992e1c35c" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.422201 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.428075 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-79xk6" event={"ID":"fc061809-61de-4d52-909b-e2d4957dc4a4","Type":"ContainerStarted","Data":"0e30b92ebed4bb6fd05b7a710c9994701993088f47724c296dbc81d9da47cefe"} Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.507380 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-d7b8787dc-5z4vp"] Feb 18 19:38:50 crc kubenswrapper[4754]: E0218 19:38:50.508227 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a109a6c-ffaa-479e-95e6-ef033aec4b27" containerName="barbican-db-sync" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.508318 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a109a6c-ffaa-479e-95e6-ef033aec4b27" containerName="barbican-db-sync" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.508645 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a109a6c-ffaa-479e-95e6-ef033aec4b27" containerName="barbican-db-sync" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.509865 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-d7b8787dc-5z4vp" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.510676 4754 scope.go:117] "RemoveContainer" containerID="31a673fedb94d39d14fb873356fe0387209e37df8a6c39f013e7166de6652654" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.520707 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.521930 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e693768a-995f-412b-bede-020e44d17d03","Type":"ContainerStarted","Data":"e2a3c2b5d2fcfdf790e283b783371fee59decad7964454a6bccb17e361e0e1ba"} Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.569527 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-b2mr5" event={"ID":"9a109a6c-ffaa-479e-95e6-ef033aec4b27","Type":"ContainerDied","Data":"e00e789de12be741424b73fb6edd0ea814dbd606d9bd18f3ba7a2c975c21bc58"} Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.569579 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e00e789de12be741424b73fb6edd0ea814dbd606d9bd18f3ba7a2c975c21bc58" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.569663 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-b2mr5" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.588520 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d7b8787dc-5z4vp"] Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.613963 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fc8c71-d09d-4a90-86f8-7ca706bfb85f-config-data\") pod \"barbican-worker-d7b8787dc-5z4vp\" (UID: \"54fc8c71-d09d-4a90-86f8-7ca706bfb85f\") " pod="openstack/barbican-worker-d7b8787dc-5z4vp" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.614016 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fc8c71-d09d-4a90-86f8-7ca706bfb85f-logs\") pod \"barbican-worker-d7b8787dc-5z4vp\" (UID: \"54fc8c71-d09d-4a90-86f8-7ca706bfb85f\") " pod="openstack/barbican-worker-d7b8787dc-5z4vp" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.614073 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54fc8c71-d09d-4a90-86f8-7ca706bfb85f-config-data-custom\") pod \"barbican-worker-d7b8787dc-5z4vp\" (UID: \"54fc8c71-d09d-4a90-86f8-7ca706bfb85f\") " pod="openstack/barbican-worker-d7b8787dc-5z4vp" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.614122 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54fc8c71-d09d-4a90-86f8-7ca706bfb85f-combined-ca-bundle\") pod \"barbican-worker-d7b8787dc-5z4vp\" (UID: \"54fc8c71-d09d-4a90-86f8-7ca706bfb85f\") " pod="openstack/barbican-worker-d7b8787dc-5z4vp" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.614209 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksfnd\" (UniqueName: \"kubernetes.io/projected/54fc8c71-d09d-4a90-86f8-7ca706bfb85f-kube-api-access-ksfnd\") pod \"barbican-worker-d7b8787dc-5z4vp\" (UID: \"54fc8c71-d09d-4a90-86f8-7ca706bfb85f\") " pod="openstack/barbican-worker-d7b8787dc-5z4vp" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.641971 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-79xk6" podStartSLOduration=4.806187418 podStartE2EDuration="51.641941381s" podCreationTimestamp="2026-02-18 19:37:59 +0000 UTC" firstStartedPulling="2026-02-18 19:38:01.306296152 +0000 UTC m=+1183.756708948" lastFinishedPulling="2026-02-18 19:38:48.142050115 +0000 UTC m=+1230.592462911" observedRunningTime="2026-02-18 19:38:50.547210388 +0000 UTC m=+1232.997623184" watchObservedRunningTime="2026-02-18 19:38:50.641941381 +0000 UTC m=+1233.092354177" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.660448 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f30c68d8-931c-4aca-a099-c7c969a92b61","Type":"ContainerStarted","Data":"78930d0332c4e8c6b510e6585a390d7c76c5955b44bd1a2e3bba25ed3f96d339"} Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.720328 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksfnd\" (UniqueName: \"kubernetes.io/projected/54fc8c71-d09d-4a90-86f8-7ca706bfb85f-kube-api-access-ksfnd\") pod \"barbican-worker-d7b8787dc-5z4vp\" (UID: \"54fc8c71-d09d-4a90-86f8-7ca706bfb85f\") " pod="openstack/barbican-worker-d7b8787dc-5z4vp" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.720815 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fc8c71-d09d-4a90-86f8-7ca706bfb85f-config-data\") pod \"barbican-worker-d7b8787dc-5z4vp\" (UID: 
\"54fc8c71-d09d-4a90-86f8-7ca706bfb85f\") " pod="openstack/barbican-worker-d7b8787dc-5z4vp" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.720837 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fc8c71-d09d-4a90-86f8-7ca706bfb85f-logs\") pod \"barbican-worker-d7b8787dc-5z4vp\" (UID: \"54fc8c71-d09d-4a90-86f8-7ca706bfb85f\") " pod="openstack/barbican-worker-d7b8787dc-5z4vp" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.720874 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54fc8c71-d09d-4a90-86f8-7ca706bfb85f-config-data-custom\") pod \"barbican-worker-d7b8787dc-5z4vp\" (UID: \"54fc8c71-d09d-4a90-86f8-7ca706bfb85f\") " pod="openstack/barbican-worker-d7b8787dc-5z4vp" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.720894 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54fc8c71-d09d-4a90-86f8-7ca706bfb85f-combined-ca-bundle\") pod \"barbican-worker-d7b8787dc-5z4vp\" (UID: \"54fc8c71-d09d-4a90-86f8-7ca706bfb85f\") " pod="openstack/barbican-worker-d7b8787dc-5z4vp" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.725726 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fc8c71-d09d-4a90-86f8-7ca706bfb85f-logs\") pod \"barbican-worker-d7b8787dc-5z4vp\" (UID: \"54fc8c71-d09d-4a90-86f8-7ca706bfb85f\") " pod="openstack/barbican-worker-d7b8787dc-5z4vp" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.733272 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6bc9bfd988-dzqsz"] Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.734938 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/54fc8c71-d09d-4a90-86f8-7ca706bfb85f-combined-ca-bundle\") pod \"barbican-worker-d7b8787dc-5z4vp\" (UID: \"54fc8c71-d09d-4a90-86f8-7ca706bfb85f\") " pod="openstack/barbican-worker-d7b8787dc-5z4vp" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.735642 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.746917 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54fc8c71-d09d-4a90-86f8-7ca706bfb85f-config-data-custom\") pod \"barbican-worker-d7b8787dc-5z4vp\" (UID: \"54fc8c71-d09d-4a90-86f8-7ca706bfb85f\") " pod="openstack/barbican-worker-d7b8787dc-5z4vp" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.747374 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.749438 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fc8c71-d09d-4a90-86f8-7ca706bfb85f-config-data\") pod \"barbican-worker-d7b8787dc-5z4vp\" (UID: \"54fc8c71-d09d-4a90-86f8-7ca706bfb85f\") " pod="openstack/barbican-worker-d7b8787dc-5z4vp" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.758585 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksfnd\" (UniqueName: \"kubernetes.io/projected/54fc8c71-d09d-4a90-86f8-7ca706bfb85f-kube-api-access-ksfnd\") pod \"barbican-worker-d7b8787dc-5z4vp\" (UID: \"54fc8c71-d09d-4a90-86f8-7ca706bfb85f\") " pod="openstack/barbican-worker-d7b8787dc-5z4vp" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.826808 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c9f2e513-5b5e-4c83-b2b6-8a12216cc926-logs\") pod \"barbican-keystone-listener-6bc9bfd988-dzqsz\" (UID: \"c9f2e513-5b5e-4c83-b2b6-8a12216cc926\") " pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.826884 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9f2e513-5b5e-4c83-b2b6-8a12216cc926-config-data-custom\") pod \"barbican-keystone-listener-6bc9bfd988-dzqsz\" (UID: \"c9f2e513-5b5e-4c83-b2b6-8a12216cc926\") " pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.826912 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9f2e513-5b5e-4c83-b2b6-8a12216cc926-config-data\") pod \"barbican-keystone-listener-6bc9bfd988-dzqsz\" (UID: \"c9f2e513-5b5e-4c83-b2b6-8a12216cc926\") " pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.826931 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkr4j\" (UniqueName: \"kubernetes.io/projected/c9f2e513-5b5e-4c83-b2b6-8a12216cc926-kube-api-access-xkr4j\") pod \"barbican-keystone-listener-6bc9bfd988-dzqsz\" (UID: \"c9f2e513-5b5e-4c83-b2b6-8a12216cc926\") " pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.826967 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f2e513-5b5e-4c83-b2b6-8a12216cc926-combined-ca-bundle\") pod \"barbican-keystone-listener-6bc9bfd988-dzqsz\" (UID: \"c9f2e513-5b5e-4c83-b2b6-8a12216cc926\") " pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" 
Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.840300 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:38:50 crc kubenswrapper[4754]: W0218 19:38:50.848783 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod365128a8_105e_4041_a7d8_9e1948b41aef.slice/crio-ad360edc615c6bd61ea11847620356b5b0598d0c375e18c63fa226219354d196 WatchSource:0}: Error finding container ad360edc615c6bd61ea11847620356b5b0598d0c375e18c63fa226219354d196: Status 404 returned error can't find the container with id ad360edc615c6bd61ea11847620356b5b0598d0c375e18c63fa226219354d196 Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.871899 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-d7b8787dc-5z4vp" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.926280 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.928977 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9f2e513-5b5e-4c83-b2b6-8a12216cc926-logs\") pod \"barbican-keystone-listener-6bc9bfd988-dzqsz\" (UID: \"c9f2e513-5b5e-4c83-b2b6-8a12216cc926\") " pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.929180 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9f2e513-5b5e-4c83-b2b6-8a12216cc926-config-data-custom\") pod \"barbican-keystone-listener-6bc9bfd988-dzqsz\" (UID: \"c9f2e513-5b5e-4c83-b2b6-8a12216cc926\") " pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.929260 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/c9f2e513-5b5e-4c83-b2b6-8a12216cc926-config-data\") pod \"barbican-keystone-listener-6bc9bfd988-dzqsz\" (UID: \"c9f2e513-5b5e-4c83-b2b6-8a12216cc926\") " pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.929364 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkr4j\" (UniqueName: \"kubernetes.io/projected/c9f2e513-5b5e-4c83-b2b6-8a12216cc926-kube-api-access-xkr4j\") pod \"barbican-keystone-listener-6bc9bfd988-dzqsz\" (UID: \"c9f2e513-5b5e-4c83-b2b6-8a12216cc926\") " pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.929476 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f2e513-5b5e-4c83-b2b6-8a12216cc926-combined-ca-bundle\") pod \"barbican-keystone-listener-6bc9bfd988-dzqsz\" (UID: \"c9f2e513-5b5e-4c83-b2b6-8a12216cc926\") " pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.929839 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9f2e513-5b5e-4c83-b2b6-8a12216cc926-logs\") pod \"barbican-keystone-listener-6bc9bfd988-dzqsz\" (UID: \"c9f2e513-5b5e-4c83-b2b6-8a12216cc926\") " pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.940336 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f2e513-5b5e-4c83-b2b6-8a12216cc926-combined-ca-bundle\") pod \"barbican-keystone-listener-6bc9bfd988-dzqsz\" (UID: \"c9f2e513-5b5e-4c83-b2b6-8a12216cc926\") " pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.951577 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9f2e513-5b5e-4c83-b2b6-8a12216cc926-config-data\") pod \"barbican-keystone-listener-6bc9bfd988-dzqsz\" (UID: \"c9f2e513-5b5e-4c83-b2b6-8a12216cc926\") " pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.952223 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9f2e513-5b5e-4c83-b2b6-8a12216cc926-config-data-custom\") pod \"barbican-keystone-listener-6bc9bfd988-dzqsz\" (UID: \"c9f2e513-5b5e-4c83-b2b6-8a12216cc926\") " pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.968912 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkr4j\" (UniqueName: \"kubernetes.io/projected/c9f2e513-5b5e-4c83-b2b6-8a12216cc926-kube-api-access-xkr4j\") pod \"barbican-keystone-listener-6bc9bfd988-dzqsz\" (UID: \"c9f2e513-5b5e-4c83-b2b6-8a12216cc926\") " pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" Feb 18 19:38:50 crc kubenswrapper[4754]: I0218 19:38:50.969914 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-644bd8cdcb-r8qtx"] Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.016894 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6bc9bfd988-dzqsz"] Feb 18 19:38:51 crc kubenswrapper[4754]: W0218 19:38:51.069391 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod522d7b0a_243a_4469_b9f1_d6d838827080.slice/crio-7f33defc14e6c90e6984cdd94862896e8937f4c6b4b65489d3aef6947c80c6f0 WatchSource:0}: Error finding container 7f33defc14e6c90e6984cdd94862896e8937f4c6b4b65489d3aef6947c80c6f0: Status 404 returned error can't find the container with id 
7f33defc14e6c90e6984cdd94862896e8937f4c6b4b65489d3aef6947c80c6f0 Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.096833 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.108743 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.111347 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.140800 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.141242 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.163784 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.231561 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.255841 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-config-data\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.256079 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 
19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.256289 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.256331 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.256368 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-public-tls-certs\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.256403 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-logs\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.256520 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ln2n\" (UniqueName: \"kubernetes.io/projected/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-kube-api-access-4ln2n\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.330904 4754 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6bd9t"] Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.333085 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.344764 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6bd9t"] Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.369092 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ln2n\" (UniqueName: \"kubernetes.io/projected/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-kube-api-access-4ln2n\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.369200 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-config-data\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.369278 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.369368 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.369394 4754 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.369415 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-public-tls-certs\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.369432 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-logs\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.369950 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-logs\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.378103 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.381583 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-config-data\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: 
I0218 19:38:51.382297 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.383486 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.388588 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-public-tls-certs\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.399549 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ln2n\" (UniqueName: \"kubernetes.io/projected/b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8-kube-api-access-4ln2n\") pod \"watcher-api-0\" (UID: \"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8\") " pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.409454 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8669779bb4-bghj5"] Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.421881 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7cd4746946-gww6b"] Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.423862 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.432930 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.462619 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7cd4746946-gww6b"] Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.470873 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-config\") pod \"dnsmasq-dns-848cf88cfc-6bd9t\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.470951 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-6bd9t\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.470979 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf79q\" (UniqueName: \"kubernetes.io/projected/bb038046-b50b-427e-8c6e-8106009fea7d-kube-api-access-gf79q\") pod \"dnsmasq-dns-848cf88cfc-6bd9t\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.471014 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-combined-ca-bundle\") pod \"barbican-api-7cd4746946-gww6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " 
pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.471038 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-config-data\") pod \"barbican-api-7cd4746946-gww6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.471068 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-config-data-custom\") pod \"barbican-api-7cd4746946-gww6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.471088 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-6bd9t\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.471110 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-6bd9t\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.472853 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75cdt\" (UniqueName: \"kubernetes.io/projected/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-kube-api-access-75cdt\") pod 
\"barbican-api-7cd4746946-gww6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.472911 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-logs\") pod \"barbican-api-7cd4746946-gww6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.473156 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-6bd9t\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.490640 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-cfd5fbcfb-z278z"] Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.576272 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-6bd9t\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.576366 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-config\") pod \"dnsmasq-dns-848cf88cfc-6bd9t\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.576399 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-6bd9t\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.576416 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf79q\" (UniqueName: \"kubernetes.io/projected/bb038046-b50b-427e-8c6e-8106009fea7d-kube-api-access-gf79q\") pod \"dnsmasq-dns-848cf88cfc-6bd9t\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.576439 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-combined-ca-bundle\") pod \"barbican-api-7cd4746946-gww6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.576454 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-config-data\") pod \"barbican-api-7cd4746946-gww6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.576473 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-config-data-custom\") pod \"barbican-api-7cd4746946-gww6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.576492 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-6bd9t\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.576512 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-6bd9t\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.576536 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75cdt\" (UniqueName: \"kubernetes.io/projected/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-kube-api-access-75cdt\") pod \"barbican-api-7cd4746946-gww6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.576556 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-logs\") pod \"barbican-api-7cd4746946-gww6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.576942 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-logs\") pod \"barbican-api-7cd4746946-gww6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.577131 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.578265 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-6bd9t\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.578797 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-config\") pod \"dnsmasq-dns-848cf88cfc-6bd9t\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.579443 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-6bd9t\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.584005 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-6bd9t\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.584902 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-6bd9t\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 
19:38:51.587795 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-combined-ca-bundle\") pod \"barbican-api-7cd4746946-gww6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.617497 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf79q\" (UniqueName: \"kubernetes.io/projected/bb038046-b50b-427e-8c6e-8106009fea7d-kube-api-access-gf79q\") pod \"dnsmasq-dns-848cf88cfc-6bd9t\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.618794 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-config-data\") pod \"barbican-api-7cd4746946-gww6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.619672 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75cdt\" (UniqueName: \"kubernetes.io/projected/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-kube-api-access-75cdt\") pod \"barbican-api-7cd4746946-gww6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.620590 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-config-data-custom\") pod \"barbican-api-7cd4746946-gww6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.690163 4754 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.692954 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-644bd8cdcb-r8qtx" event={"ID":"365128a8-105e-4041-a7d8-9e1948b41aef","Type":"ContainerStarted","Data":"ad360edc615c6bd61ea11847620356b5b0598d0c375e18c63fa226219354d196"} Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.716942 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cfd5fbcfb-z278z" event={"ID":"1a6fa563-05b7-4d18-839c-19055119022e","Type":"ContainerStarted","Data":"d4595e9108b4508472d227f096a1f59dc3e5cd0bdf5ae5e07a1c7eb700ba63d9"} Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.730061 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8669779bb4-bghj5" event={"ID":"522d7b0a-243a-4469-b9f1-d6d838827080","Type":"ContainerStarted","Data":"7f33defc14e6c90e6984cdd94862896e8937f4c6b4b65489d3aef6947c80c6f0"} Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.787765 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:38:51 crc kubenswrapper[4754]: I0218 19:38:51.959276 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d7b8787dc-5z4vp"] Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.203859 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6bc9bfd988-dzqsz"] Feb 18 19:38:52 crc kubenswrapper[4754]: W0218 19:38:52.260598 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9f2e513_5b5e_4c83_b2b6_8a12216cc926.slice/crio-6a0ce4395a6bcb6d9674ae89fed918ce35fa2af8f44f93640881bee945f3475d WatchSource:0}: Error finding container 6a0ce4395a6bcb6d9674ae89fed918ce35fa2af8f44f93640881bee945f3475d: Status 404 returned error can't find the container with id 6a0ce4395a6bcb6d9674ae89fed918ce35fa2af8f44f93640881bee945f3475d Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.318174 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33ee2423-3834-4e72-98f8-7c799d966afa" path="/var/lib/kubelet/pods/33ee2423-3834-4e72-98f8-7c799d966afa/volumes" Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.508762 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.729126 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7cd4746946-gww6b"] Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.749831 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6bd9t"] Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.753583 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" 
event={"ID":"c9f2e513-5b5e-4c83-b2b6-8a12216cc926","Type":"ContainerStarted","Data":"6a0ce4395a6bcb6d9674ae89fed918ce35fa2af8f44f93640881bee945f3475d"} Feb 18 19:38:52 crc kubenswrapper[4754]: W0218 19:38:52.757543 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod624cdacb_75c4_4a8b_86f8_8f0d451b6c6b.slice/crio-17a165c11c9acffb076945092b00aae2c89885d1bb0403862dee339f2c9a9d69 WatchSource:0}: Error finding container 17a165c11c9acffb076945092b00aae2c89885d1bb0403862dee339f2c9a9d69: Status 404 returned error can't find the container with id 17a165c11c9acffb076945092b00aae2c89885d1bb0403862dee339f2c9a9d69 Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.763246 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-644bd8cdcb-r8qtx" event={"ID":"365128a8-105e-4041-a7d8-9e1948b41aef","Type":"ContainerStarted","Data":"03c1ba98d22a8c1f53f4fc015c91777aea170f282acd4c41d462a9c474ec141a"} Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.765030 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-644bd8cdcb-r8qtx" Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.790515 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cfd5fbcfb-z278z" event={"ID":"1a6fa563-05b7-4d18-839c-19055119022e","Type":"ContainerStarted","Data":"7187049c5c2a6867db27d3059d4fb45a385b8121d1d07576b2f54c23f77dc39a"} Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.790571 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-cfd5fbcfb-z278z" event={"ID":"1a6fa563-05b7-4d18-839c-19055119022e","Type":"ContainerStarted","Data":"91404bc6c593ed14e7e14e9d97be8e416e396214bfad9a4ff5d5d90e22e1d482"} Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.791946 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:52 crc 
kubenswrapper[4754]: I0218 19:38:52.791978 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-cfd5fbcfb-z278z" Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.813599 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-644bd8cdcb-r8qtx" podStartSLOduration=3.813572231 podStartE2EDuration="3.813572231s" podCreationTimestamp="2026-02-18 19:38:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:52.789329686 +0000 UTC m=+1235.239742502" watchObservedRunningTime="2026-02-18 19:38:52.813572231 +0000 UTC m=+1235.263985027" Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.847854 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f30c68d8-931c-4aca-a099-c7c969a92b61","Type":"ContainerStarted","Data":"9eec0a4f901921f692b3e6dbe60ef1b6a448c270eced53b342cf297ad53381d4"} Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.853156 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-cfd5fbcfb-z278z" podStartSLOduration=3.853110477 podStartE2EDuration="3.853110477s" podCreationTimestamp="2026-02-18 19:38:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:52.839838059 +0000 UTC m=+1235.290250855" watchObservedRunningTime="2026-02-18 19:38:52.853110477 +0000 UTC m=+1235.303523273" Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.893975 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=12.893951923 podStartE2EDuration="12.893951923s" podCreationTimestamp="2026-02-18 19:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:52.892967213 +0000 UTC m=+1235.343380009" watchObservedRunningTime="2026-02-18 19:38:52.893951923 +0000 UTC m=+1235.344364719" Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.906722 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8","Type":"ContainerStarted","Data":"271bb8ee977485012527cba51c1dab31dbc07634eec9d902b11d3eb421a260db"} Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.915296 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d7b8787dc-5z4vp" event={"ID":"54fc8c71-d09d-4a90-86f8-7ca706bfb85f","Type":"ContainerStarted","Data":"92972e38da7c090040cddc01af24f3d4afd0d583cc4baaefea8043a7491bb9c8"} Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.939967 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e693768a-995f-412b-bede-020e44d17d03","Type":"ContainerStarted","Data":"4274789014a180f7b040256175cf68d15c60e81fafb8d63ab7f9c66b59be702b"} Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.966452 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=12.966428742 podStartE2EDuration="12.966428742s" podCreationTimestamp="2026-02-18 19:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:52.965838235 +0000 UTC m=+1235.416251031" watchObservedRunningTime="2026-02-18 19:38:52.966428742 +0000 UTC m=+1235.416841538" Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.989929 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8669779bb4-bghj5" 
event={"ID":"522d7b0a-243a-4469-b9f1-d6d838827080","Type":"ContainerStarted","Data":"30740e6b62f23104634c587affcd7f1907d60cd41503653455c75e86339bba83"} Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.990996 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:52 crc kubenswrapper[4754]: I0218 19:38:52.991057 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:38:53 crc kubenswrapper[4754]: I0218 19:38:53.031929 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8669779bb4-bghj5" podStartSLOduration=4.031908026 podStartE2EDuration="4.031908026s" podCreationTimestamp="2026-02-18 19:38:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:53.021657721 +0000 UTC m=+1235.472070517" watchObservedRunningTime="2026-02-18 19:38:53.031908026 +0000 UTC m=+1235.482320822" Feb 18 19:38:53 crc kubenswrapper[4754]: I0218 19:38:53.039023 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-9577ccdb8-nfcx9" podUID="8afcabe6-a035-4ecd-8522-93afd1691f25" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.161:8443: connect: connection refused" Feb 18 19:38:53 crc kubenswrapper[4754]: I0218 19:38:53.153035 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5f7766589b-gh94d" podUID="c99f043f-84fb-4825-8ba7-c918263e6c7f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.070452 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8","Type":"ContainerStarted","Data":"eba611e795afe8b2a535ef4f0dc5e12ed494b9771b19814c42647742857a2e84"} Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.070501 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b0f4d3c5-72e7-411b-a2b0-cd3e3771eab8","Type":"ContainerStarted","Data":"6483ea93f4c936262f66687caa72ec387b8b3990b8686a3c39ceb95634599025"} Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.071186 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.086610 4754 generic.go:334] "Generic (PLEG): container finished" podID="bb038046-b50b-427e-8c6e-8106009fea7d" containerID="c0cd4e7f2ac47f206a8a3eb9a8c38372f30ab69d4a287cd1bfa526fa4c5cb9e9" exitCode=0 Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.086739 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" event={"ID":"bb038046-b50b-427e-8c6e-8106009fea7d","Type":"ContainerDied","Data":"c0cd4e7f2ac47f206a8a3eb9a8c38372f30ab69d4a287cd1bfa526fa4c5cb9e9"} Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.086770 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" event={"ID":"bb038046-b50b-427e-8c6e-8106009fea7d","Type":"ContainerStarted","Data":"090b0d71e9c85ba5fba5342cdece9b5a083abd580161bd04dce77b6c0f8aa57d"} Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.102709 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8669779bb4-bghj5" event={"ID":"522d7b0a-243a-4469-b9f1-d6d838827080","Type":"ContainerStarted","Data":"76ecba34c4bb8be5d5369b3b679f32ad1fb79f3b61666e44d71cfe6f1518ae78"} Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.121848 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" 
podStartSLOduration=4.121820428 podStartE2EDuration="4.121820428s" podCreationTimestamp="2026-02-18 19:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:54.094221989 +0000 UTC m=+1236.544634775" watchObservedRunningTime="2026-02-18 19:38:54.121820428 +0000 UTC m=+1236.572233224" Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.152865 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cd4746946-gww6b" event={"ID":"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b","Type":"ContainerStarted","Data":"1f4103b72b0bff0efa28367232c952c5cf3387f56dd65ccc252f3347851f0d35"} Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.152924 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cd4746946-gww6b" event={"ID":"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b","Type":"ContainerStarted","Data":"9dd347a1ce511272d3e335fab32f234b71131d807005f2523c44950f1d7afcd8"} Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.152935 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cd4746946-gww6b" event={"ID":"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b","Type":"ContainerStarted","Data":"17a165c11c9acffb076945092b00aae2c89885d1bb0403862dee339f2c9a9d69"} Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.164033 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.164219 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.226757 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7cd4746946-gww6b" podStartSLOduration=3.226735984 podStartE2EDuration="3.226735984s" podCreationTimestamp="2026-02-18 19:38:51 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:54.201821108 +0000 UTC m=+1236.652233904" watchObservedRunningTime="2026-02-18 19:38:54.226735984 +0000 UTC m=+1236.677148780" Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.789230 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-57cff76b44-jxsl5"] Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.799126 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.802999 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.803283 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.821945 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57cff76b44-jxsl5"] Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.913837 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8dj2\" (UniqueName: \"kubernetes.io/projected/3b7fa87f-3faa-4606-a1af-8983f692ff4e-kube-api-access-g8dj2\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.913895 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7fa87f-3faa-4606-a1af-8983f692ff4e-combined-ca-bundle\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:54 crc 
kubenswrapper[4754]: I0218 19:38:54.913915 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7fa87f-3faa-4606-a1af-8983f692ff4e-internal-tls-certs\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.913946 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7fa87f-3faa-4606-a1af-8983f692ff4e-public-tls-certs\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.913964 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b7fa87f-3faa-4606-a1af-8983f692ff4e-config-data-custom\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.913979 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b7fa87f-3faa-4606-a1af-8983f692ff4e-config-data\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:54 crc kubenswrapper[4754]: I0218 19:38:54.914090 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b7fa87f-3faa-4606-a1af-8983f692ff4e-logs\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " 
pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:55 crc kubenswrapper[4754]: I0218 19:38:55.017996 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8dj2\" (UniqueName: \"kubernetes.io/projected/3b7fa87f-3faa-4606-a1af-8983f692ff4e-kube-api-access-g8dj2\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:55 crc kubenswrapper[4754]: I0218 19:38:55.018090 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7fa87f-3faa-4606-a1af-8983f692ff4e-combined-ca-bundle\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:55 crc kubenswrapper[4754]: I0218 19:38:55.018129 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7fa87f-3faa-4606-a1af-8983f692ff4e-internal-tls-certs\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:55 crc kubenswrapper[4754]: I0218 19:38:55.018195 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7fa87f-3faa-4606-a1af-8983f692ff4e-public-tls-certs\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:55 crc kubenswrapper[4754]: I0218 19:38:55.018231 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b7fa87f-3faa-4606-a1af-8983f692ff4e-config-data-custom\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " 
pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:55 crc kubenswrapper[4754]: I0218 19:38:55.018257 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b7fa87f-3faa-4606-a1af-8983f692ff4e-config-data\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:55 crc kubenswrapper[4754]: I0218 19:38:55.018466 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b7fa87f-3faa-4606-a1af-8983f692ff4e-logs\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:55 crc kubenswrapper[4754]: I0218 19:38:55.020623 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b7fa87f-3faa-4606-a1af-8983f692ff4e-logs\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:55 crc kubenswrapper[4754]: I0218 19:38:55.024174 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7fa87f-3faa-4606-a1af-8983f692ff4e-combined-ca-bundle\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:55 crc kubenswrapper[4754]: I0218 19:38:55.024870 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b7fa87f-3faa-4606-a1af-8983f692ff4e-config-data\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:55 crc kubenswrapper[4754]: I0218 19:38:55.025072 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7fa87f-3faa-4606-a1af-8983f692ff4e-public-tls-certs\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:55 crc kubenswrapper[4754]: I0218 19:38:55.028166 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b7fa87f-3faa-4606-a1af-8983f692ff4e-config-data-custom\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:55 crc kubenswrapper[4754]: I0218 19:38:55.029813 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7fa87f-3faa-4606-a1af-8983f692ff4e-internal-tls-certs\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:55 crc kubenswrapper[4754]: I0218 19:38:55.041877 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8dj2\" (UniqueName: \"kubernetes.io/projected/3b7fa87f-3faa-4606-a1af-8983f692ff4e-kube-api-access-g8dj2\") pod \"barbican-api-57cff76b44-jxsl5\" (UID: \"3b7fa87f-3faa-4606-a1af-8983f692ff4e\") " pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:55 crc kubenswrapper[4754]: I0218 19:38:55.132450 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:55 crc kubenswrapper[4754]: I0218 19:38:55.185381 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" event={"ID":"bb038046-b50b-427e-8c6e-8106009fea7d","Type":"ContainerStarted","Data":"7b3057f07dfc37f8114e546b1215f6506765a7cfd37e3ebdd345974f4b685507"} Feb 18 19:38:55 crc kubenswrapper[4754]: I0218 19:38:55.215506 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" podStartSLOduration=5.215482504 podStartE2EDuration="5.215482504s" podCreationTimestamp="2026-02-18 19:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:55.205705364 +0000 UTC m=+1237.656118160" watchObservedRunningTime="2026-02-18 19:38:55.215482504 +0000 UTC m=+1237.665895300" Feb 18 19:38:56 crc kubenswrapper[4754]: I0218 19:38:56.247996 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:38:56 crc kubenswrapper[4754]: I0218 19:38:56.587250 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:38:56 crc kubenswrapper[4754]: I0218 19:38:56.587612 4754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:38:56 crc kubenswrapper[4754]: I0218 19:38:56.745219 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57cff76b44-jxsl5"] Feb 18 19:38:57 crc kubenswrapper[4754]: I0218 19:38:57.236806 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 19:38:57 crc kubenswrapper[4754]: I0218 19:38:57.259922 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" 
event={"ID":"c9f2e513-5b5e-4c83-b2b6-8a12216cc926","Type":"ContainerStarted","Data":"a06536f9eb3c4e2f807b4c990682cd8f493c5aff69ae5b5546d72954d405682e"} Feb 18 19:38:57 crc kubenswrapper[4754]: I0218 19:38:57.261110 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" event={"ID":"c9f2e513-5b5e-4c83-b2b6-8a12216cc926","Type":"ContainerStarted","Data":"0d0b3ef195ef9ebb1ed24f3efedef955d4fc03371bbf4ad7d1537598500a1361"} Feb 18 19:38:57 crc kubenswrapper[4754]: I0218 19:38:57.265630 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d7b8787dc-5z4vp" event={"ID":"54fc8c71-d09d-4a90-86f8-7ca706bfb85f","Type":"ContainerStarted","Data":"349faaccdce3da57ee972b30044cf582d3edb03f6c41563c1fcdb0a31879ccd1"} Feb 18 19:38:57 crc kubenswrapper[4754]: I0218 19:38:57.265679 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d7b8787dc-5z4vp" event={"ID":"54fc8c71-d09d-4a90-86f8-7ca706bfb85f","Type":"ContainerStarted","Data":"02771f0b42ddeba01d23675c6d38987896e81cdebb96dd96cb8f0492505770de"} Feb 18 19:38:57 crc kubenswrapper[4754]: I0218 19:38:57.275827 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57cff76b44-jxsl5" event={"ID":"3b7fa87f-3faa-4606-a1af-8983f692ff4e","Type":"ContainerStarted","Data":"ac657e299aac66e10f95d3a55f4f7d9a39bf62c762d08a3cc9360a4c69b8ef94"} Feb 18 19:38:57 crc kubenswrapper[4754]: I0218 19:38:57.275904 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57cff76b44-jxsl5" event={"ID":"3b7fa87f-3faa-4606-a1af-8983f692ff4e","Type":"ContainerStarted","Data":"dc6cdcbf190cef309f3f359ca8a02a00aa9392ea9534d4b5ae590a093db3b449"} Feb 18 19:38:57 crc kubenswrapper[4754]: I0218 19:38:57.298997 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6bc9bfd988-dzqsz" podStartSLOduration=3.4018707839999998 
podStartE2EDuration="7.298973673s" podCreationTimestamp="2026-02-18 19:38:50 +0000 UTC" firstStartedPulling="2026-02-18 19:38:52.292254227 +0000 UTC m=+1234.742667023" lastFinishedPulling="2026-02-18 19:38:56.189357116 +0000 UTC m=+1238.639769912" observedRunningTime="2026-02-18 19:38:57.284617622 +0000 UTC m=+1239.735030428" watchObservedRunningTime="2026-02-18 19:38:57.298973673 +0000 UTC m=+1239.749386469" Feb 18 19:38:57 crc kubenswrapper[4754]: I0218 19:38:57.319756 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-d7b8787dc-5z4vp" podStartSLOduration=3.097250305 podStartE2EDuration="7.319738281s" podCreationTimestamp="2026-02-18 19:38:50 +0000 UTC" firstStartedPulling="2026-02-18 19:38:51.977107255 +0000 UTC m=+1234.427520051" lastFinishedPulling="2026-02-18 19:38:56.199595231 +0000 UTC m=+1238.650008027" observedRunningTime="2026-02-18 19:38:57.310483757 +0000 UTC m=+1239.760896543" watchObservedRunningTime="2026-02-18 19:38:57.319738281 +0000 UTC m=+1239.770151077" Feb 18 19:38:58 crc kubenswrapper[4754]: I0218 19:38:58.303261 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57cff76b44-jxsl5" event={"ID":"3b7fa87f-3faa-4606-a1af-8983f692ff4e","Type":"ContainerStarted","Data":"e8abcf6788e6fb8f67c5a270ef8d65f9ce523402f0826fafcd6154453ccad588"} Feb 18 19:38:58 crc kubenswrapper[4754]: I0218 19:38:58.303674 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:58 crc kubenswrapper[4754]: I0218 19:38:58.303709 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:38:58 crc kubenswrapper[4754]: I0218 19:38:58.332109 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-57cff76b44-jxsl5" podStartSLOduration=4.332083367 podStartE2EDuration="4.332083367s" podCreationTimestamp="2026-02-18 
19:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:58.328745475 +0000 UTC m=+1240.779158281" watchObservedRunningTime="2026-02-18 19:38:58.332083367 +0000 UTC m=+1240.782496163" Feb 18 19:38:59 crc kubenswrapper[4754]: I0218 19:38:59.318776 4754 generic.go:334] "Generic (PLEG): container finished" podID="fc061809-61de-4d52-909b-e2d4957dc4a4" containerID="0e30b92ebed4bb6fd05b7a710c9994701993088f47724c296dbc81d9da47cefe" exitCode=0 Feb 18 19:38:59 crc kubenswrapper[4754]: I0218 19:38:59.318869 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-79xk6" event={"ID":"fc061809-61de-4d52-909b-e2d4957dc4a4","Type":"ContainerDied","Data":"0e30b92ebed4bb6fd05b7a710c9994701993088f47724c296dbc81d9da47cefe"} Feb 18 19:39:00 crc kubenswrapper[4754]: I0218 19:39:00.509128 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:00 crc kubenswrapper[4754]: I0218 19:39:00.509533 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:00 crc kubenswrapper[4754]: I0218 19:39:00.557639 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:00 crc kubenswrapper[4754]: I0218 19:39:00.558614 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:00 crc kubenswrapper[4754]: I0218 19:39:00.854549 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:39:00 crc kubenswrapper[4754]: I0218 19:39:00.856103 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:39:00 crc kubenswrapper[4754]: 
I0218 19:39:00.912986 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 19:39:00 crc kubenswrapper[4754]: I0218 19:39:00.953939 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 19:39:01 crc kubenswrapper[4754]: I0218 19:39:01.365849 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:01 crc kubenswrapper[4754]: I0218 19:39:01.366348 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 19:39:01 crc kubenswrapper[4754]: I0218 19:39:01.366440 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 19:39:01 crc kubenswrapper[4754]: I0218 19:39:01.366502 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:01 crc kubenswrapper[4754]: I0218 19:39:01.582977 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 18 19:39:01 crc kubenswrapper[4754]: I0218 19:39:01.608844 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 18 19:39:01 crc kubenswrapper[4754]: I0218 19:39:01.696574 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" Feb 18 19:39:01 crc kubenswrapper[4754]: I0218 19:39:01.764478 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-jlqj8"] Feb 18 19:39:01 crc kubenswrapper[4754]: I0218 19:39:01.764781 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" podUID="272c8937-1ebd-44f6-8514-030c1be0af24" containerName="dnsmasq-dns" 
containerID="cri-o://b53c6d0af1d2f8757741b56793f8d09287a144ddf30d6d4436ad6a9111373ad8" gracePeriod=10 Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.114543 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.123204 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" podUID="272c8937-1ebd-44f6-8514-030c1be0af24" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: connect: connection refused" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.387247 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86445745d9-8xbmt"] Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.387903 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86445745d9-8xbmt" podUID="5d382047-c43a-4f82-8982-106e10d65430" containerName="neutron-api" containerID="cri-o://707a61dd7d4589603fc20973ee023db4d07115f81a8dcde258077d8dcad555ee" gracePeriod=30 Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.389024 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86445745d9-8xbmt" podUID="5d382047-c43a-4f82-8982-106e10d65430" containerName="neutron-httpd" containerID="cri-o://57ff2fdc7943793e6161550cdedb085aa6389fd5e52e082c88c7fabdd0d7a213" gracePeriod=30 Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.389345 4754 generic.go:334] "Generic (PLEG): container finished" podID="272c8937-1ebd-44f6-8514-030c1be0af24" containerID="b53c6d0af1d2f8757741b56793f8d09287a144ddf30d6d4436ad6a9111373ad8" exitCode=0 Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.389828 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" 
event={"ID":"272c8937-1ebd-44f6-8514-030c1be0af24","Type":"ContainerDied","Data":"b53c6d0af1d2f8757741b56793f8d09287a144ddf30d6d4436ad6a9111373ad8"} Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.430803 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c78c979c7-gflt2"] Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.433018 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.466780 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c78c979c7-gflt2"] Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.474640 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.642513 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bfmf\" (UniqueName: \"kubernetes.io/projected/007a5628-fc57-4566-9ed0-35df973f14ab-kube-api-access-2bfmf\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.642613 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/007a5628-fc57-4566-9ed0-35df973f14ab-internal-tls-certs\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.642832 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/007a5628-fc57-4566-9ed0-35df973f14ab-combined-ca-bundle\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " 
pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.642964 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/007a5628-fc57-4566-9ed0-35df973f14ab-ovndb-tls-certs\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.643012 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/007a5628-fc57-4566-9ed0-35df973f14ab-config\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.643124 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/007a5628-fc57-4566-9ed0-35df973f14ab-httpd-config\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.643245 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/007a5628-fc57-4566-9ed0-35df973f14ab-public-tls-certs\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.747061 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/007a5628-fc57-4566-9ed0-35df973f14ab-httpd-config\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 
19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.747198 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/007a5628-fc57-4566-9ed0-35df973f14ab-public-tls-certs\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.747470 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bfmf\" (UniqueName: \"kubernetes.io/projected/007a5628-fc57-4566-9ed0-35df973f14ab-kube-api-access-2bfmf\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.747501 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/007a5628-fc57-4566-9ed0-35df973f14ab-internal-tls-certs\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.749181 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/007a5628-fc57-4566-9ed0-35df973f14ab-combined-ca-bundle\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.749293 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/007a5628-fc57-4566-9ed0-35df973f14ab-ovndb-tls-certs\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.749342 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/007a5628-fc57-4566-9ed0-35df973f14ab-config\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.760884 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/007a5628-fc57-4566-9ed0-35df973f14ab-combined-ca-bundle\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.761550 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/007a5628-fc57-4566-9ed0-35df973f14ab-ovndb-tls-certs\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.762917 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/007a5628-fc57-4566-9ed0-35df973f14ab-public-tls-certs\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.763360 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/007a5628-fc57-4566-9ed0-35df973f14ab-httpd-config\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.764618 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/007a5628-fc57-4566-9ed0-35df973f14ab-config\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.766183 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-86445745d9-8xbmt" podUID="5d382047-c43a-4f82-8982-106e10d65430" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.168:9696/\": read tcp 10.217.0.2:59828->10.217.0.168:9696: read: connection reset by peer" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.766243 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/007a5628-fc57-4566-9ed0-35df973f14ab-internal-tls-certs\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.771817 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bfmf\" (UniqueName: \"kubernetes.io/projected/007a5628-fc57-4566-9ed0-35df973f14ab-kube-api-access-2bfmf\") pod \"neutron-6c78c979c7-gflt2\" (UID: \"007a5628-fc57-4566-9ed0-35df973f14ab\") " pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:02 crc kubenswrapper[4754]: I0218 19:39:02.781819 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.015630 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-9577ccdb8-nfcx9" podUID="8afcabe6-a035-4ecd-8522-93afd1691f25" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.161:8443: connect: connection refused" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.149976 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5f7766589b-gh94d" podUID="c99f043f-84fb-4825-8ba7-c918263e6c7f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.150090 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.151221 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"7ad6d935c31c4a41096116f9831315cd554226731e874722795ee996818ccd68"} pod="openstack/horizon-5f7766589b-gh94d" containerMessage="Container horizon failed startup probe, will be restarted" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.151307 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5f7766589b-gh94d" podUID="c99f043f-84fb-4825-8ba7-c918263e6c7f" containerName="horizon" containerID="cri-o://7ad6d935c31c4a41096116f9831315cd554226731e874722795ee996818ccd68" gracePeriod=30 Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.416721 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-79xk6" 
event={"ID":"fc061809-61de-4d52-909b-e2d4957dc4a4","Type":"ContainerDied","Data":"b0ce13457280e82b4c7cbb09e22a8cf9d4df5b01d368cd39de5bee6421babd96"} Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.416768 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0ce13457280e82b4c7cbb09e22a8cf9d4df5b01d368cd39de5bee6421babd96" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.462784 4754 generic.go:334] "Generic (PLEG): container finished" podID="5d382047-c43a-4f82-8982-106e10d65430" containerID="57ff2fdc7943793e6161550cdedb085aa6389fd5e52e082c88c7fabdd0d7a213" exitCode=0 Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.462933 4754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.462944 4754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.463948 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86445745d9-8xbmt" event={"ID":"5d382047-c43a-4f82-8982-106e10d65430","Type":"ContainerDied","Data":"57ff2fdc7943793e6161550cdedb085aa6389fd5e52e082c88c7fabdd0d7a213"} Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.464455 4754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.464474 4754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.557794 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-79xk6" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.672938 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-db-sync-config-data\") pod \"fc061809-61de-4d52-909b-e2d4957dc4a4\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.673094 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwmnw\" (UniqueName: \"kubernetes.io/projected/fc061809-61de-4d52-909b-e2d4957dc4a4-kube-api-access-vwmnw\") pod \"fc061809-61de-4d52-909b-e2d4957dc4a4\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.673119 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-config-data\") pod \"fc061809-61de-4d52-909b-e2d4957dc4a4\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.673166 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-scripts\") pod \"fc061809-61de-4d52-909b-e2d4957dc4a4\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.673205 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc061809-61de-4d52-909b-e2d4957dc4a4-etc-machine-id\") pod \"fc061809-61de-4d52-909b-e2d4957dc4a4\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.673239 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-combined-ca-bundle\") pod \"fc061809-61de-4d52-909b-e2d4957dc4a4\" (UID: \"fc061809-61de-4d52-909b-e2d4957dc4a4\") " Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.674390 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc061809-61de-4d52-909b-e2d4957dc4a4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fc061809-61de-4d52-909b-e2d4957dc4a4" (UID: "fc061809-61de-4d52-909b-e2d4957dc4a4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.685296 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fc061809-61de-4d52-909b-e2d4957dc4a4" (UID: "fc061809-61de-4d52-909b-e2d4957dc4a4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.692357 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-scripts" (OuterVolumeSpecName: "scripts") pod "fc061809-61de-4d52-909b-e2d4957dc4a4" (UID: "fc061809-61de-4d52-909b-e2d4957dc4a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.705131 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc061809-61de-4d52-909b-e2d4957dc4a4-kube-api-access-vwmnw" (OuterVolumeSpecName: "kube-api-access-vwmnw") pod "fc061809-61de-4d52-909b-e2d4957dc4a4" (UID: "fc061809-61de-4d52-909b-e2d4957dc4a4"). InnerVolumeSpecName "kube-api-access-vwmnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.754315 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc061809-61de-4d52-909b-e2d4957dc4a4" (UID: "fc061809-61de-4d52-909b-e2d4957dc4a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.775581 4754 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.775621 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwmnw\" (UniqueName: \"kubernetes.io/projected/fc061809-61de-4d52-909b-e2d4957dc4a4-kube-api-access-vwmnw\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.775636 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.775645 4754 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc061809-61de-4d52-909b-e2d4957dc4a4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.775656 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.893349 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-config-data" (OuterVolumeSpecName: "config-data") pod "fc061809-61de-4d52-909b-e2d4957dc4a4" (UID: "fc061809-61de-4d52-909b-e2d4957dc4a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:03 crc kubenswrapper[4754]: I0218 19:39:03.981508 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc061809-61de-4d52-909b-e2d4957dc4a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:04 crc kubenswrapper[4754]: I0218 19:39:04.472823 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-79xk6" Feb 18 19:39:04 crc kubenswrapper[4754]: I0218 19:39:04.720438 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-86445745d9-8xbmt" podUID="5d382047-c43a-4f82-8982-106e10d65430" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.168:9696/\": dial tcp 10.217.0.168:9696: connect: connection refused" Feb 18 19:39:04 crc kubenswrapper[4754]: I0218 19:39:04.799409 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:39:04 crc kubenswrapper[4754]: I0218 19:39:04.922197 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:39:04 crc kubenswrapper[4754]: I0218 19:39:04.942213 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:39:04 crc kubenswrapper[4754]: E0218 19:39:04.942796 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc061809-61de-4d52-909b-e2d4957dc4a4" containerName="cinder-db-sync" Feb 18 19:39:04 crc kubenswrapper[4754]: I0218 19:39:04.942812 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc061809-61de-4d52-909b-e2d4957dc4a4" containerName="cinder-db-sync" 
Feb 18 19:39:04 crc kubenswrapper[4754]: I0218 19:39:04.943030 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc061809-61de-4d52-909b-e2d4957dc4a4" containerName="cinder-db-sync" Feb 18 19:39:04 crc kubenswrapper[4754]: I0218 19:39:04.944158 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:39:04 crc kubenswrapper[4754]: I0218 19:39:04.954744 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 19:39:04 crc kubenswrapper[4754]: I0218 19:39:04.955218 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 19:39:04 crc kubenswrapper[4754]: I0218 19:39:04.955328 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4mfk7" Feb 18 19:39:04 crc kubenswrapper[4754]: I0218 19:39:04.955480 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 19:39:04 crc kubenswrapper[4754]: I0218 19:39:04.971399 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.072405 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-wbqqj"] Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.074842 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.089188 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-wbqqj"] Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.105639 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") " pod="openstack/cinder-scheduler-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.105721 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-scripts\") pod \"cinder-scheduler-0\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") " pod="openstack/cinder-scheduler-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.105792 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") " pod="openstack/cinder-scheduler-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.105848 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qzrd\" (UniqueName: \"kubernetes.io/projected/7ed03dcf-4bbf-441d-b17f-f534b9640183-kube-api-access-4qzrd\") pod \"cinder-scheduler-0\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") " pod="openstack/cinder-scheduler-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.105958 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-config-data\") pod \"cinder-scheduler-0\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") " pod="openstack/cinder-scheduler-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.105992 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7ed03dcf-4bbf-441d-b17f-f534b9640183-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") " pod="openstack/cinder-scheduler-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.191224 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.193002 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.204792 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.207172 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.209413 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l7t2\" (UniqueName: \"kubernetes.io/projected/64903172-1b19-4bf2-b44c-1635bf00ca14-kube-api-access-9l7t2\") pod \"dnsmasq-dns-6578955fd5-wbqqj\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.209453 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-dns-svc\") pod \"dnsmasq-dns-6578955fd5-wbqqj\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " 
pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.209487 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-wbqqj\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.209547 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") " pod="openstack/cinder-scheduler-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.209605 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-scripts\") pod \"cinder-scheduler-0\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") " pod="openstack/cinder-scheduler-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.209650 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") " pod="openstack/cinder-scheduler-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.209689 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qzrd\" (UniqueName: \"kubernetes.io/projected/7ed03dcf-4bbf-441d-b17f-f534b9640183-kube-api-access-4qzrd\") pod \"cinder-scheduler-0\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") " pod="openstack/cinder-scheduler-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 
19:39:05.209726 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-wbqqj\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.209751 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-config\") pod \"dnsmasq-dns-6578955fd5-wbqqj\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.209806 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-wbqqj\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.209838 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-config-data\") pod \"cinder-scheduler-0\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") " pod="openstack/cinder-scheduler-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.209867 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7ed03dcf-4bbf-441d-b17f-f534b9640183-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") " pod="openstack/cinder-scheduler-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.209958 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7ed03dcf-4bbf-441d-b17f-f534b9640183-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") " pod="openstack/cinder-scheduler-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.231418 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-scripts\") pod \"cinder-scheduler-0\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") " pod="openstack/cinder-scheduler-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.231719 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") " pod="openstack/cinder-scheduler-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.232117 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") " pod="openstack/cinder-scheduler-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.244190 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-config-data\") pod \"cinder-scheduler-0\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") " pod="openstack/cinder-scheduler-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.276881 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qzrd\" (UniqueName: \"kubernetes.io/projected/7ed03dcf-4bbf-441d-b17f-f534b9640183-kube-api-access-4qzrd\") pod \"cinder-scheduler-0\" (UID: 
\"7ed03dcf-4bbf-441d-b17f-f534b9640183\") " pod="openstack/cinder-scheduler-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.304442 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.323437 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bb14290-2ce4-427b-b636-259f4c7a6dc3-logs\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.323690 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-config-data-custom\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.323721 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-wbqqj\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.323770 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-config\") pod \"dnsmasq-dns-6578955fd5-wbqqj\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.323985 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmzc9\" (UniqueName: 
\"kubernetes.io/projected/8bb14290-2ce4-427b-b636-259f4c7a6dc3-kube-api-access-hmzc9\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.324050 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-wbqqj\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.324199 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.324255 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-config-data\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.324301 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l7t2\" (UniqueName: \"kubernetes.io/projected/64903172-1b19-4bf2-b44c-1635bf00ca14-kube-api-access-9l7t2\") pod \"dnsmasq-dns-6578955fd5-wbqqj\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.324333 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-dns-svc\") pod 
\"dnsmasq-dns-6578955fd5-wbqqj\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.324354 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8bb14290-2ce4-427b-b636-259f4c7a6dc3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.324415 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-wbqqj\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.324572 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-scripts\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.336122 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-wbqqj\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.348035 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-config\") pod \"dnsmasq-dns-6578955fd5-wbqqj\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " 
pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.361554 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-dns-svc\") pod \"dnsmasq-dns-6578955fd5-wbqqj\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.362276 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-wbqqj\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.368945 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-wbqqj\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.394757 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l7t2\" (UniqueName: \"kubernetes.io/projected/64903172-1b19-4bf2-b44c-1635bf00ca14-kube-api-access-9l7t2\") pod \"dnsmasq-dns-6578955fd5-wbqqj\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.439427 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bb14290-2ce4-427b-b636-259f4c7a6dc3-logs\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.440000 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-config-data-custom\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.440089 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmzc9\" (UniqueName: \"kubernetes.io/projected/8bb14290-2ce4-427b-b636-259f4c7a6dc3-kube-api-access-hmzc9\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.440159 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.440204 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-config-data\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.440243 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8bb14290-2ce4-427b-b636-259f4c7a6dc3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.440308 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-scripts\") pod \"cinder-api-0\" (UID: 
\"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.457742 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-scripts\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.458076 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bb14290-2ce4-427b-b636-259f4c7a6dc3-logs\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.467645 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8bb14290-2ce4-427b-b636-259f4c7a6dc3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.467884 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.469903 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.478030 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-config-data-custom\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.483944 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-config-data\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.496835 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmzc9\" (UniqueName: \"kubernetes.io/projected/8bb14290-2ce4-427b-b636-259f4c7a6dc3-kube-api-access-hmzc9\") pod \"cinder-api-0\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " pod="openstack/cinder-api-0" Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.550502 4754 generic.go:334] "Generic (PLEG): container finished" podID="78669beb-cdbe-41e0-8897-3bcf16dc9bdb" containerID="e443f17e3a218266fdd89edbb2dc4b3bb8154614b82f61c282613b1c175747c9" exitCode=137 Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.550560 4754 generic.go:334] "Generic (PLEG): container finished" podID="78669beb-cdbe-41e0-8897-3bcf16dc9bdb" containerID="b599a85daa6653dda778e99a7566028a8701f531634e4f9143f9bfdf3d1124f8" exitCode=137 Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.550663 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-888954555-c8j52" event={"ID":"78669beb-cdbe-41e0-8897-3bcf16dc9bdb","Type":"ContainerDied","Data":"e443f17e3a218266fdd89edbb2dc4b3bb8154614b82f61c282613b1c175747c9"} Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 
19:39:05.550702 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-888954555-c8j52" event={"ID":"78669beb-cdbe-41e0-8897-3bcf16dc9bdb","Type":"ContainerDied","Data":"b599a85daa6653dda778e99a7566028a8701f531634e4f9143f9bfdf3d1124f8"} Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.577255 4754 generic.go:334] "Generic (PLEG): container finished" podID="0c9800a6-c17a-482a-8f95-134b2df4afba" containerID="57288089d738ace640dc14fd322768426539890741742e8c906acf8333882060" exitCode=137 Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.577293 4754 generic.go:334] "Generic (PLEG): container finished" podID="0c9800a6-c17a-482a-8f95-134b2df4afba" containerID="a0015bfcf589b05141ae04d31583003f5a20afafe3728c93e026f062a3e2c7b9" exitCode=137 Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.578297 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69df465b89-p9cqb" event={"ID":"0c9800a6-c17a-482a-8f95-134b2df4afba","Type":"ContainerDied","Data":"57288089d738ace640dc14fd322768426539890741742e8c906acf8333882060"} Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.578388 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69df465b89-p9cqb" event={"ID":"0c9800a6-c17a-482a-8f95-134b2df4afba","Type":"ContainerDied","Data":"a0015bfcf589b05141ae04d31583003f5a20afafe3728c93e026f062a3e2c7b9"} Feb 18 19:39:05 crc kubenswrapper[4754]: I0218 19:39:05.764043 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:39:06 crc kubenswrapper[4754]: I0218 19:39:06.270624 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 19:39:06 crc kubenswrapper[4754]: I0218 19:39:06.271703 4754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:39:06 crc kubenswrapper[4754]: I0218 19:39:06.361022 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 19:39:06 crc kubenswrapper[4754]: I0218 19:39:06.412295 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:06 crc kubenswrapper[4754]: I0218 19:39:06.412763 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.390497 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.401259 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xss4g\" (UniqueName: \"kubernetes.io/projected/272c8937-1ebd-44f6-8514-030c1be0af24-kube-api-access-xss4g\") pod \"272c8937-1ebd-44f6-8514-030c1be0af24\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.401347 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-ovsdbserver-sb\") pod \"272c8937-1ebd-44f6-8514-030c1be0af24\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.401389 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-dns-svc\") pod \"272c8937-1ebd-44f6-8514-030c1be0af24\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.401409 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-config\") pod \"272c8937-1ebd-44f6-8514-030c1be0af24\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.401490 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-dns-swift-storage-0\") pod \"272c8937-1ebd-44f6-8514-030c1be0af24\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.401529 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-ovsdbserver-nb\") pod \"272c8937-1ebd-44f6-8514-030c1be0af24\" (UID: \"272c8937-1ebd-44f6-8514-030c1be0af24\") " Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.424093 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/272c8937-1ebd-44f6-8514-030c1be0af24-kube-api-access-xss4g" (OuterVolumeSpecName: "kube-api-access-xss4g") pod "272c8937-1ebd-44f6-8514-030c1be0af24" (UID: "272c8937-1ebd-44f6-8514-030c1be0af24"). InnerVolumeSpecName "kube-api-access-xss4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.503609 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xss4g\" (UniqueName: \"kubernetes.io/projected/272c8937-1ebd-44f6-8514-030c1be0af24-kube-api-access-xss4g\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.542948 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "272c8937-1ebd-44f6-8514-030c1be0af24" (UID: "272c8937-1ebd-44f6-8514-030c1be0af24"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.551500 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "272c8937-1ebd-44f6-8514-030c1be0af24" (UID: "272c8937-1ebd-44f6-8514-030c1be0af24"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.578650 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-config" (OuterVolumeSpecName: "config") pod "272c8937-1ebd-44f6-8514-030c1be0af24" (UID: "272c8937-1ebd-44f6-8514-030c1be0af24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.611742 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "272c8937-1ebd-44f6-8514-030c1be0af24" (UID: "272c8937-1ebd-44f6-8514-030c1be0af24"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.612849 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.612875 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.612885 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.612894 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" 
Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.627085 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "272c8937-1ebd-44f6-8514-030c1be0af24" (UID: "272c8937-1ebd-44f6-8514-030c1be0af24"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.682539 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" event={"ID":"272c8937-1ebd-44f6-8514-030c1be0af24","Type":"ContainerDied","Data":"0695c6243300be266c9cda6cfa6e6658aabdce4587af422208e92dd9949088b6"} Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.682597 4754 scope.go:117] "RemoveContainer" containerID="b53c6d0af1d2f8757741b56793f8d09287a144ddf30d6d4436ad6a9111373ad8" Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.682762 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.715224 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/272c8937-1ebd-44f6-8514-030c1be0af24-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.766443 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-jlqj8"] Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.797243 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-jlqj8"] Feb 18 19:39:07 crc kubenswrapper[4754]: I0218 19:39:07.805976 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:39:08 crc kubenswrapper[4754]: I0218 19:39:08.097339 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:39:08 crc kubenswrapper[4754]: I0218 19:39:08.097412 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:39:08 crc kubenswrapper[4754]: I0218 19:39:08.248621 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="272c8937-1ebd-44f6-8514-030c1be0af24" path="/var/lib/kubelet/pods/272c8937-1ebd-44f6-8514-030c1be0af24/volumes" Feb 18 19:39:08 crc kubenswrapper[4754]: I0218 19:39:08.511291 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:39:09 crc 
kubenswrapper[4754]: I0218 19:39:09.423777 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-57cff76b44-jxsl5" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.541825 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7cd4746946-gww6b"] Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.542214 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7cd4746946-gww6b" podUID="624cdacb-75c4-4a8b-86f8-8f0d451b6c6b" containerName="barbican-api-log" containerID="cri-o://9dd347a1ce511272d3e335fab32f234b71131d807005f2523c44950f1d7afcd8" gracePeriod=30 Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.542995 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7cd4746946-gww6b" podUID="624cdacb-75c4-4a8b-86f8-8f0d451b6c6b" containerName="barbican-api" containerID="cri-o://1f4103b72b0bff0efa28367232c952c5cf3387f56dd65ccc252f3347851f0d35" gracePeriod=30 Feb 18 19:39:09 crc kubenswrapper[4754]: E0218 19:39:09.561327 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 18 19:39:09 crc kubenswrapper[4754]: E0218 19:39:09.561572 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdrtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(742e0717-1560-424d-b0d3-4e7b46f8ec8c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:39:09 crc kubenswrapper[4754]: E0218 19:39:09.565167 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="742e0717-1560-424d-b0d3-4e7b46f8ec8c" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.566659 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7cd4746946-gww6b" podUID="624cdacb-75c4-4a8b-86f8-8f0d451b6c6b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": EOF" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.642783 4754 scope.go:117] "RemoveContainer" 
containerID="66b4dcfe75f794c2ace7f64c76a5d413ee5a5420af6afdc0e8a9008d4642c345" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.749721 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.764459 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-888954555-c8j52" event={"ID":"78669beb-cdbe-41e0-8897-3bcf16dc9bdb","Type":"ContainerDied","Data":"be3930fb5efc913a2038d7acbb68db2a6c68fa9e040d751f5da66d509d8807cd"} Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.764508 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be3930fb5efc913a2038d7acbb68db2a6c68fa9e040d751f5da66d509d8807cd" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.793340 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-888954555-c8j52" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.807971 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69df465b89-p9cqb" event={"ID":"0c9800a6-c17a-482a-8f95-134b2df4afba","Type":"ContainerDied","Data":"3f2f6843c3ceadd8976f8f4657f4e6923a2f54e4df2405e46aa7cc7d7e9baf9c"} Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.808109 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69df465b89-p9cqb" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.831470 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="742e0717-1560-424d-b0d3-4e7b46f8ec8c" containerName="ceilometer-notification-agent" containerID="cri-o://88b97ccf66a7afe2041000e54869e0c49270685ba81a68eb30cc7c638a205f23" gracePeriod=30 Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.831878 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="742e0717-1560-424d-b0d3-4e7b46f8ec8c" containerName="sg-core" containerID="cri-o://11d141f0bac541db92edc6c262a59d160bc1dba4ddd0b667fbe30a91d42f0df2" gracePeriod=30 Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.880881 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-config-data\") pod \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.880946 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c9800a6-c17a-482a-8f95-134b2df4afba-scripts\") pod \"0c9800a6-c17a-482a-8f95-134b2df4afba\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.880976 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfl2f\" (UniqueName: \"kubernetes.io/projected/0c9800a6-c17a-482a-8f95-134b2df4afba-kube-api-access-mfl2f\") pod \"0c9800a6-c17a-482a-8f95-134b2df4afba\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.881038 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-scripts\") pod \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.881124 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c9800a6-c17a-482a-8f95-134b2df4afba-config-data\") pod \"0c9800a6-c17a-482a-8f95-134b2df4afba\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.881204 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-logs\") pod \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.881255 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-horizon-secret-key\") pod \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.881309 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c9800a6-c17a-482a-8f95-134b2df4afba-logs\") pod \"0c9800a6-c17a-482a-8f95-134b2df4afba\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.881324 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzkmb\" (UniqueName: \"kubernetes.io/projected/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-kube-api-access-rzkmb\") pod \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\" (UID: \"78669beb-cdbe-41e0-8897-3bcf16dc9bdb\") " Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.881339 4754 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0c9800a6-c17a-482a-8f95-134b2df4afba-horizon-secret-key\") pod \"0c9800a6-c17a-482a-8f95-134b2df4afba\" (UID: \"0c9800a6-c17a-482a-8f95-134b2df4afba\") " Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.895642 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c9800a6-c17a-482a-8f95-134b2df4afba-logs" (OuterVolumeSpecName: "logs") pod "0c9800a6-c17a-482a-8f95-134b2df4afba" (UID: "0c9800a6-c17a-482a-8f95-134b2df4afba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.898690 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-logs" (OuterVolumeSpecName: "logs") pod "78669beb-cdbe-41e0-8897-3bcf16dc9bdb" (UID: "78669beb-cdbe-41e0-8897-3bcf16dc9bdb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.898820 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c9800a6-c17a-482a-8f95-134b2df4afba-kube-api-access-mfl2f" (OuterVolumeSpecName: "kube-api-access-mfl2f") pod "0c9800a6-c17a-482a-8f95-134b2df4afba" (UID: "0c9800a6-c17a-482a-8f95-134b2df4afba"). InnerVolumeSpecName "kube-api-access-mfl2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.900388 4754 scope.go:117] "RemoveContainer" containerID="57288089d738ace640dc14fd322768426539890741742e8c906acf8333882060" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.901235 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c9800a6-c17a-482a-8f95-134b2df4afba-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0c9800a6-c17a-482a-8f95-134b2df4afba" (UID: "0c9800a6-c17a-482a-8f95-134b2df4afba"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.911985 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "78669beb-cdbe-41e0-8897-3bcf16dc9bdb" (UID: "78669beb-cdbe-41e0-8897-3bcf16dc9bdb"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.914586 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-kube-api-access-rzkmb" (OuterVolumeSpecName: "kube-api-access-rzkmb") pod "78669beb-cdbe-41e0-8897-3bcf16dc9bdb" (UID: "78669beb-cdbe-41e0-8897-3bcf16dc9bdb"). InnerVolumeSpecName "kube-api-access-rzkmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.926216 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c9800a6-c17a-482a-8f95-134b2df4afba-scripts" (OuterVolumeSpecName: "scripts") pod "0c9800a6-c17a-482a-8f95-134b2df4afba" (UID: "0c9800a6-c17a-482a-8f95-134b2df4afba"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.931194 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c9800a6-c17a-482a-8f95-134b2df4afba-config-data" (OuterVolumeSpecName: "config-data") pod "0c9800a6-c17a-482a-8f95-134b2df4afba" (UID: "0c9800a6-c17a-482a-8f95-134b2df4afba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.983721 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-scripts" (OuterVolumeSpecName: "scripts") pod "78669beb-cdbe-41e0-8897-3bcf16dc9bdb" (UID: "78669beb-cdbe-41e0-8897-3bcf16dc9bdb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.986599 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c9800a6-c17a-482a-8f95-134b2df4afba-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.986637 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfl2f\" (UniqueName: \"kubernetes.io/projected/0c9800a6-c17a-482a-8f95-134b2df4afba-kube-api-access-mfl2f\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.986649 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.986662 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c9800a6-c17a-482a-8f95-134b2df4afba-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:09 crc 
kubenswrapper[4754]: I0218 19:39:09.986675 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.986689 4754 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.986701 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c9800a6-c17a-482a-8f95-134b2df4afba-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.986712 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzkmb\" (UniqueName: \"kubernetes.io/projected/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-kube-api-access-rzkmb\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:09 crc kubenswrapper[4754]: I0218 19:39:09.986723 4754 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0c9800a6-c17a-482a-8f95-134b2df4afba-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.005527 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-config-data" (OuterVolumeSpecName: "config-data") pod "78669beb-cdbe-41e0-8897-3bcf16dc9bdb" (UID: "78669beb-cdbe-41e0-8897-3bcf16dc9bdb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.088818 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78669beb-cdbe-41e0-8897-3bcf16dc9bdb-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.129092 4754 scope.go:117] "RemoveContainer" containerID="a0015bfcf589b05141ae04d31583003f5a20afafe3728c93e026f062a3e2c7b9" Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.156914 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69df465b89-p9cqb"] Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.169106 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-69df465b89-p9cqb"] Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.222108 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c9800a6-c17a-482a-8f95-134b2df4afba" path="/var/lib/kubelet/pods/0c9800a6-c17a-482a-8f95-134b2df4afba/volumes" Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.279227 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c78c979c7-gflt2"] Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.326251 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:39:10 crc kubenswrapper[4754]: W0218 19:39:10.327873 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ed03dcf_4bbf_441d_b17f_f534b9640183.slice/crio-9adea094fd785fd192f6299fb51dc0c629b2801a1c19fb45cc73d1ff6b1e3b05 WatchSource:0}: Error finding container 9adea094fd785fd192f6299fb51dc0c629b2801a1c19fb45cc73d1ff6b1e3b05: Status 404 returned error can't find the container with id 9adea094fd785fd192f6299fb51dc0c629b2801a1c19fb45cc73d1ff6b1e3b05 Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.463858 4754 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.556889 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-wbqqj"] Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.891423 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"742e0717-1560-424d-b0d3-4e7b46f8ec8c","Type":"ContainerDied","Data":"11d141f0bac541db92edc6c262a59d160bc1dba4ddd0b667fbe30a91d42f0df2"} Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.897216 4754 generic.go:334] "Generic (PLEG): container finished" podID="742e0717-1560-424d-b0d3-4e7b46f8ec8c" containerID="11d141f0bac541db92edc6c262a59d160bc1dba4ddd0b667fbe30a91d42f0df2" exitCode=2 Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.914537 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" event={"ID":"64903172-1b19-4bf2-b44c-1635bf00ca14","Type":"ContainerStarted","Data":"f43642ba27f947fc7a2cbf49477a3a559e570c93cad57c29052cc49d92f95766"} Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.914586 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" event={"ID":"64903172-1b19-4bf2-b44c-1635bf00ca14","Type":"ContainerStarted","Data":"745da8de126ae7f90459ef5ba06f450d43f91078b1451391b68dacb5178b9792"} Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.923406 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c78c979c7-gflt2" event={"ID":"007a5628-fc57-4566-9ed0-35df973f14ab","Type":"ContainerStarted","Data":"7b3ab37fbb92f7ed00571df741cc527c216706722d3441919ac6f6f2e325f6b3"} Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.923481 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c78c979c7-gflt2" 
event={"ID":"007a5628-fc57-4566-9ed0-35df973f14ab","Type":"ContainerStarted","Data":"484e2312ee4de46150be6afe2bfa4c2d5399f59222de744b6bcd59ffb61489a9"} Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.929096 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8bb14290-2ce4-427b-b636-259f4c7a6dc3","Type":"ContainerStarted","Data":"718c63bcb1025fce4cbd130a3a9185af947e0902f62a3980b0fbd8e49c4b3306"} Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.944523 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7ed03dcf-4bbf-441d-b17f-f534b9640183","Type":"ContainerStarted","Data":"9adea094fd785fd192f6299fb51dc0c629b2801a1c19fb45cc73d1ff6b1e3b05"} Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.953662 4754 generic.go:334] "Generic (PLEG): container finished" podID="624cdacb-75c4-4a8b-86f8-8f0d451b6c6b" containerID="9dd347a1ce511272d3e335fab32f234b71131d807005f2523c44950f1d7afcd8" exitCode=143 Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.953778 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-888954555-c8j52" Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.954765 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cd4746946-gww6b" event={"ID":"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b","Type":"ContainerDied","Data":"9dd347a1ce511272d3e335fab32f234b71131d807005f2523c44950f1d7afcd8"} Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.991120 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-888954555-c8j52"] Feb 18 19:39:10 crc kubenswrapper[4754]: I0218 19:39:10.999615 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-888954555-c8j52"] Feb 18 19:39:12 crc kubenswrapper[4754]: I0218 19:39:11.999861 4754 generic.go:334] "Generic (PLEG): container finished" podID="64903172-1b19-4bf2-b44c-1635bf00ca14" containerID="f43642ba27f947fc7a2cbf49477a3a559e570c93cad57c29052cc49d92f95766" exitCode=0 Feb 18 19:39:12 crc kubenswrapper[4754]: I0218 19:39:12.000246 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" event={"ID":"64903172-1b19-4bf2-b44c-1635bf00ca14","Type":"ContainerDied","Data":"f43642ba27f947fc7a2cbf49477a3a559e570c93cad57c29052cc49d92f95766"} Feb 18 19:39:12 crc kubenswrapper[4754]: I0218 19:39:12.000283 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" event={"ID":"64903172-1b19-4bf2-b44c-1635bf00ca14","Type":"ContainerStarted","Data":"ab9048b237e67f20e5f121078b52c627d3989cccf4a25b57389019a14a236035"} Feb 18 19:39:12 crc kubenswrapper[4754]: I0218 19:39:12.000343 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:12 crc kubenswrapper[4754]: I0218 19:39:12.005399 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c78c979c7-gflt2" 
event={"ID":"007a5628-fc57-4566-9ed0-35df973f14ab","Type":"ContainerStarted","Data":"c6556c58aa350995a80f6d540956ae8b937523dde5ecb7073e9c621505429d3c"} Feb 18 19:39:12 crc kubenswrapper[4754]: I0218 19:39:12.005626 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:12 crc kubenswrapper[4754]: I0218 19:39:12.017229 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8bb14290-2ce4-427b-b636-259f4c7a6dc3","Type":"ContainerStarted","Data":"1afdee6d88555c5162e86e4f7e56942d386b55b66fc246542b0fa8fe4d85a5d9"} Feb 18 19:39:12 crc kubenswrapper[4754]: I0218 19:39:12.042769 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" podStartSLOduration=8.042742489 podStartE2EDuration="8.042742489s" podCreationTimestamp="2026-02-18 19:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:12.04081312 +0000 UTC m=+1254.491225916" watchObservedRunningTime="2026-02-18 19:39:12.042742489 +0000 UTC m=+1254.493155285" Feb 18 19:39:12 crc kubenswrapper[4754]: I0218 19:39:12.077545 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c78c979c7-gflt2" podStartSLOduration=10.077517929 podStartE2EDuration="10.077517929s" podCreationTimestamp="2026-02-18 19:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:12.075048353 +0000 UTC m=+1254.525461149" watchObservedRunningTime="2026-02-18 19:39:12.077517929 +0000 UTC m=+1254.527930715" Feb 18 19:39:12 crc kubenswrapper[4754]: I0218 19:39:12.119390 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7b667979-jlqj8" podUID="272c8937-1ebd-44f6-8514-030c1be0af24" containerName="dnsmasq-dns" 
probeResult="failure" output="dial tcp 10.217.0.165:5353: i/o timeout" Feb 18 19:39:12 crc kubenswrapper[4754]: I0218 19:39:12.242237 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78669beb-cdbe-41e0-8897-3bcf16dc9bdb" path="/var/lib/kubelet/pods/78669beb-cdbe-41e0-8897-3bcf16dc9bdb/volumes" Feb 18 19:39:12 crc kubenswrapper[4754]: I0218 19:39:12.984484 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7cd4746946-gww6b" podUID="624cdacb-75c4-4a8b-86f8-8f0d451b6c6b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": read tcp 10.217.0.2:57206->10.217.0.178:9311: read: connection reset by peer" Feb 18 19:39:12 crc kubenswrapper[4754]: I0218 19:39:12.984487 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7cd4746946-gww6b" podUID="624cdacb-75c4-4a8b-86f8-8f0d451b6c6b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.178:9311/healthcheck\": read tcp 10.217.0.2:57194->10.217.0.178:9311: read: connection reset by peer" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.034133 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8bb14290-2ce4-427b-b636-259f4c7a6dc3","Type":"ContainerStarted","Data":"dce3d8a1317192324a43ee0205275aa103350f3fe2f3d7f2e0aec9dbf8692a33"} Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.034356 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8bb14290-2ce4-427b-b636-259f4c7a6dc3" containerName="cinder-api-log" containerID="cri-o://1afdee6d88555c5162e86e4f7e56942d386b55b66fc246542b0fa8fe4d85a5d9" gracePeriod=30 Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.034394 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.034450 4754 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8bb14290-2ce4-427b-b636-259f4c7a6dc3" containerName="cinder-api" containerID="cri-o://dce3d8a1317192324a43ee0205275aa103350f3fe2f3d7f2e0aec9dbf8692a33" gracePeriod=30 Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.038460 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7ed03dcf-4bbf-441d-b17f-f534b9640183","Type":"ContainerStarted","Data":"db53c1e00e2cb5fc8d97bf979804881e4f3518abc71b706585a7fa70d86fbafb"} Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.038490 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7ed03dcf-4bbf-441d-b17f-f534b9640183","Type":"ContainerStarted","Data":"3a657c825d040812930b7076694ec65a76741e210aa94d9a4f3104dee648bfc7"} Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.059354 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.059331595 podStartE2EDuration="8.059331595s" podCreationTimestamp="2026-02-18 19:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:13.056188228 +0000 UTC m=+1255.506601034" watchObservedRunningTime="2026-02-18 19:39:13.059331595 +0000 UTC m=+1255.509744391" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.115122 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=8.1819415 podStartE2EDuration="9.11509872s" podCreationTimestamp="2026-02-18 19:39:04 +0000 UTC" firstStartedPulling="2026-02-18 19:39:10.330724245 +0000 UTC m=+1252.781137041" lastFinishedPulling="2026-02-18 19:39:11.263881465 +0000 UTC m=+1253.714294261" observedRunningTime="2026-02-18 19:39:13.102669898 +0000 UTC m=+1255.553082694" 
watchObservedRunningTime="2026-02-18 19:39:13.11509872 +0000 UTC m=+1255.565511516" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.694481 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.778218 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.859091 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-logs\") pod \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.859173 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-config-data-custom\") pod \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.859218 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-config-data\") pod \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.859348 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75cdt\" (UniqueName: \"kubernetes.io/projected/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-kube-api-access-75cdt\") pod \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.859526 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-combined-ca-bundle\") pod \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\" (UID: \"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b\") " Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.859973 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-logs" (OuterVolumeSpecName: "logs") pod "624cdacb-75c4-4a8b-86f8-8f0d451b6c6b" (UID: "624cdacb-75c4-4a8b-86f8-8f0d451b6c6b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.860156 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.867731 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "624cdacb-75c4-4a8b-86f8-8f0d451b6c6b" (UID: "624cdacb-75c4-4a8b-86f8-8f0d451b6c6b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.869484 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-kube-api-access-75cdt" (OuterVolumeSpecName: "kube-api-access-75cdt") pod "624cdacb-75c4-4a8b-86f8-8f0d451b6c6b" (UID: "624cdacb-75c4-4a8b-86f8-8f0d451b6c6b"). InnerVolumeSpecName "kube-api-access-75cdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.902250 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "624cdacb-75c4-4a8b-86f8-8f0d451b6c6b" (UID: "624cdacb-75c4-4a8b-86f8-8f0d451b6c6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.913600 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-config-data" (OuterVolumeSpecName: "config-data") pod "624cdacb-75c4-4a8b-86f8-8f0d451b6c6b" (UID: "624cdacb-75c4-4a8b-86f8-8f0d451b6c6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.961926 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-scripts\") pod \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.962044 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8bb14290-2ce4-427b-b636-259f4c7a6dc3-etc-machine-id\") pod \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.962154 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bb14290-2ce4-427b-b636-259f4c7a6dc3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8bb14290-2ce4-427b-b636-259f4c7a6dc3" (UID: "8bb14290-2ce4-427b-b636-259f4c7a6dc3"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.962217 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-combined-ca-bundle\") pod \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.962296 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmzc9\" (UniqueName: \"kubernetes.io/projected/8bb14290-2ce4-427b-b636-259f4c7a6dc3-kube-api-access-hmzc9\") pod \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.962393 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bb14290-2ce4-427b-b636-259f4c7a6dc3-logs\") pod \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.962424 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-config-data-custom\") pod \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.962442 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-config-data\") pod \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\" (UID: \"8bb14290-2ce4-427b-b636-259f4c7a6dc3\") " Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.962983 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8bb14290-2ce4-427b-b636-259f4c7a6dc3-logs" (OuterVolumeSpecName: "logs") pod "8bb14290-2ce4-427b-b636-259f4c7a6dc3" (UID: "8bb14290-2ce4-427b-b636-259f4c7a6dc3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.964614 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.964660 4754 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8bb14290-2ce4-427b-b636-259f4c7a6dc3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.964674 4754 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.964686 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.964699 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75cdt\" (UniqueName: \"kubernetes.io/projected/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b-kube-api-access-75cdt\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.964714 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8bb14290-2ce4-427b-b636-259f4c7a6dc3-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.967706 4754 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-scripts" (OuterVolumeSpecName: "scripts") pod "8bb14290-2ce4-427b-b636-259f4c7a6dc3" (UID: "8bb14290-2ce4-427b-b636-259f4c7a6dc3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.967833 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bb14290-2ce4-427b-b636-259f4c7a6dc3-kube-api-access-hmzc9" (OuterVolumeSpecName: "kube-api-access-hmzc9") pod "8bb14290-2ce4-427b-b636-259f4c7a6dc3" (UID: "8bb14290-2ce4-427b-b636-259f4c7a6dc3"). InnerVolumeSpecName "kube-api-access-hmzc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:13 crc kubenswrapper[4754]: I0218 19:39:13.977766 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8bb14290-2ce4-427b-b636-259f4c7a6dc3" (UID: "8bb14290-2ce4-427b-b636-259f4c7a6dc3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.001678 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8bb14290-2ce4-427b-b636-259f4c7a6dc3" (UID: "8bb14290-2ce4-427b-b636-259f4c7a6dc3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.019857 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-config-data" (OuterVolumeSpecName: "config-data") pod "8bb14290-2ce4-427b-b636-259f4c7a6dc3" (UID: "8bb14290-2ce4-427b-b636-259f4c7a6dc3"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.073985 4754 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.074040 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.074058 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.074075 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bb14290-2ce4-427b-b636-259f4c7a6dc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.074136 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmzc9\" (UniqueName: \"kubernetes.io/projected/8bb14290-2ce4-427b-b636-259f4c7a6dc3-kube-api-access-hmzc9\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.079066 4754 generic.go:334] "Generic (PLEG): container finished" podID="8bb14290-2ce4-427b-b636-259f4c7a6dc3" containerID="dce3d8a1317192324a43ee0205275aa103350f3fe2f3d7f2e0aec9dbf8692a33" exitCode=0 Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.079117 4754 generic.go:334] "Generic (PLEG): container finished" podID="8bb14290-2ce4-427b-b636-259f4c7a6dc3" containerID="1afdee6d88555c5162e86e4f7e56942d386b55b66fc246542b0fa8fe4d85a5d9" exitCode=143 Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 
19:39:14.079219 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8bb14290-2ce4-427b-b636-259f4c7a6dc3","Type":"ContainerDied","Data":"dce3d8a1317192324a43ee0205275aa103350f3fe2f3d7f2e0aec9dbf8692a33"} Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.079266 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8bb14290-2ce4-427b-b636-259f4c7a6dc3","Type":"ContainerDied","Data":"1afdee6d88555c5162e86e4f7e56942d386b55b66fc246542b0fa8fe4d85a5d9"} Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.079302 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8bb14290-2ce4-427b-b636-259f4c7a6dc3","Type":"ContainerDied","Data":"718c63bcb1025fce4cbd130a3a9185af947e0902f62a3980b0fbd8e49c4b3306"} Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.079331 4754 scope.go:117] "RemoveContainer" containerID="dce3d8a1317192324a43ee0205275aa103350f3fe2f3d7f2e0aec9dbf8692a33" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.079597 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.096981 4754 generic.go:334] "Generic (PLEG): container finished" podID="624cdacb-75c4-4a8b-86f8-8f0d451b6c6b" containerID="1f4103b72b0bff0efa28367232c952c5cf3387f56dd65ccc252f3347851f0d35" exitCode=0 Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.097119 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7cd4746946-gww6b" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.097216 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cd4746946-gww6b" event={"ID":"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b","Type":"ContainerDied","Data":"1f4103b72b0bff0efa28367232c952c5cf3387f56dd65ccc252f3347851f0d35"} Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.097263 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cd4746946-gww6b" event={"ID":"624cdacb-75c4-4a8b-86f8-8f0d451b6c6b","Type":"ContainerDied","Data":"17a165c11c9acffb076945092b00aae2c89885d1bb0403862dee339f2c9a9d69"} Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.168034 4754 scope.go:117] "RemoveContainer" containerID="1afdee6d88555c5162e86e4f7e56942d386b55b66fc246542b0fa8fe4d85a5d9" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.181254 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7cd4746946-gww6b"] Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.202669 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7cd4746946-gww6b"] Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.212676 4754 scope.go:117] "RemoveContainer" containerID="dce3d8a1317192324a43ee0205275aa103350f3fe2f3d7f2e0aec9dbf8692a33" Feb 18 19:39:14 crc kubenswrapper[4754]: E0218 19:39:14.213809 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dce3d8a1317192324a43ee0205275aa103350f3fe2f3d7f2e0aec9dbf8692a33\": container with ID starting with dce3d8a1317192324a43ee0205275aa103350f3fe2f3d7f2e0aec9dbf8692a33 not found: ID does not exist" containerID="dce3d8a1317192324a43ee0205275aa103350f3fe2f3d7f2e0aec9dbf8692a33" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.213873 4754 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dce3d8a1317192324a43ee0205275aa103350f3fe2f3d7f2e0aec9dbf8692a33"} err="failed to get container status \"dce3d8a1317192324a43ee0205275aa103350f3fe2f3d7f2e0aec9dbf8692a33\": rpc error: code = NotFound desc = could not find container \"dce3d8a1317192324a43ee0205275aa103350f3fe2f3d7f2e0aec9dbf8692a33\": container with ID starting with dce3d8a1317192324a43ee0205275aa103350f3fe2f3d7f2e0aec9dbf8692a33 not found: ID does not exist" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.213906 4754 scope.go:117] "RemoveContainer" containerID="1afdee6d88555c5162e86e4f7e56942d386b55b66fc246542b0fa8fe4d85a5d9" Feb 18 19:39:14 crc kubenswrapper[4754]: E0218 19:39:14.214352 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1afdee6d88555c5162e86e4f7e56942d386b55b66fc246542b0fa8fe4d85a5d9\": container with ID starting with 1afdee6d88555c5162e86e4f7e56942d386b55b66fc246542b0fa8fe4d85a5d9 not found: ID does not exist" containerID="1afdee6d88555c5162e86e4f7e56942d386b55b66fc246542b0fa8fe4d85a5d9" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.214475 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1afdee6d88555c5162e86e4f7e56942d386b55b66fc246542b0fa8fe4d85a5d9"} err="failed to get container status \"1afdee6d88555c5162e86e4f7e56942d386b55b66fc246542b0fa8fe4d85a5d9\": rpc error: code = NotFound desc = could not find container \"1afdee6d88555c5162e86e4f7e56942d386b55b66fc246542b0fa8fe4d85a5d9\": container with ID starting with 1afdee6d88555c5162e86e4f7e56942d386b55b66fc246542b0fa8fe4d85a5d9 not found: ID does not exist" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.214551 4754 scope.go:117] "RemoveContainer" containerID="dce3d8a1317192324a43ee0205275aa103350f3fe2f3d7f2e0aec9dbf8692a33" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.223528 4754 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"dce3d8a1317192324a43ee0205275aa103350f3fe2f3d7f2e0aec9dbf8692a33"} err="failed to get container status \"dce3d8a1317192324a43ee0205275aa103350f3fe2f3d7f2e0aec9dbf8692a33\": rpc error: code = NotFound desc = could not find container \"dce3d8a1317192324a43ee0205275aa103350f3fe2f3d7f2e0aec9dbf8692a33\": container with ID starting with dce3d8a1317192324a43ee0205275aa103350f3fe2f3d7f2e0aec9dbf8692a33 not found: ID does not exist" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.223617 4754 scope.go:117] "RemoveContainer" containerID="1afdee6d88555c5162e86e4f7e56942d386b55b66fc246542b0fa8fe4d85a5d9" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.224319 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1afdee6d88555c5162e86e4f7e56942d386b55b66fc246542b0fa8fe4d85a5d9"} err="failed to get container status \"1afdee6d88555c5162e86e4f7e56942d386b55b66fc246542b0fa8fe4d85a5d9\": rpc error: code = NotFound desc = could not find container \"1afdee6d88555c5162e86e4f7e56942d386b55b66fc246542b0fa8fe4d85a5d9\": container with ID starting with 1afdee6d88555c5162e86e4f7e56942d386b55b66fc246542b0fa8fe4d85a5d9 not found: ID does not exist" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.224383 4754 scope.go:117] "RemoveContainer" containerID="1f4103b72b0bff0efa28367232c952c5cf3387f56dd65ccc252f3347851f0d35" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.262919 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="624cdacb-75c4-4a8b-86f8-8f0d451b6c6b" path="/var/lib/kubelet/pods/624cdacb-75c4-4a8b-86f8-8f0d451b6c6b/volumes" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.263831 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.263973 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:39:14 crc 
kubenswrapper[4754]: I0218 19:39:14.265696 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:39:14 crc kubenswrapper[4754]: E0218 19:39:14.266083 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272c8937-1ebd-44f6-8514-030c1be0af24" containerName="dnsmasq-dns" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.266161 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="272c8937-1ebd-44f6-8514-030c1be0af24" containerName="dnsmasq-dns" Feb 18 19:39:14 crc kubenswrapper[4754]: E0218 19:39:14.266227 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="624cdacb-75c4-4a8b-86f8-8f0d451b6c6b" containerName="barbican-api-log" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.266277 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="624cdacb-75c4-4a8b-86f8-8f0d451b6c6b" containerName="barbican-api-log" Feb 18 19:39:14 crc kubenswrapper[4754]: E0218 19:39:14.266338 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78669beb-cdbe-41e0-8897-3bcf16dc9bdb" containerName="horizon" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.266389 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="78669beb-cdbe-41e0-8897-3bcf16dc9bdb" containerName="horizon" Feb 18 19:39:14 crc kubenswrapper[4754]: E0218 19:39:14.266446 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="624cdacb-75c4-4a8b-86f8-8f0d451b6c6b" containerName="barbican-api" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.266503 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="624cdacb-75c4-4a8b-86f8-8f0d451b6c6b" containerName="barbican-api" Feb 18 19:39:14 crc kubenswrapper[4754]: E0218 19:39:14.266564 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78669beb-cdbe-41e0-8897-3bcf16dc9bdb" containerName="horizon-log" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.266617 4754 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="78669beb-cdbe-41e0-8897-3bcf16dc9bdb" containerName="horizon-log" Feb 18 19:39:14 crc kubenswrapper[4754]: E0218 19:39:14.266687 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb14290-2ce4-427b-b636-259f4c7a6dc3" containerName="cinder-api" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.266737 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb14290-2ce4-427b-b636-259f4c7a6dc3" containerName="cinder-api" Feb 18 19:39:14 crc kubenswrapper[4754]: E0218 19:39:14.266800 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9800a6-c17a-482a-8f95-134b2df4afba" containerName="horizon-log" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.266855 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9800a6-c17a-482a-8f95-134b2df4afba" containerName="horizon-log" Feb 18 19:39:14 crc kubenswrapper[4754]: E0218 19:39:14.266909 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272c8937-1ebd-44f6-8514-030c1be0af24" containerName="init" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.266957 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="272c8937-1ebd-44f6-8514-030c1be0af24" containerName="init" Feb 18 19:39:14 crc kubenswrapper[4754]: E0218 19:39:14.267021 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9800a6-c17a-482a-8f95-134b2df4afba" containerName="horizon" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.267074 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9800a6-c17a-482a-8f95-134b2df4afba" containerName="horizon" Feb 18 19:39:14 crc kubenswrapper[4754]: E0218 19:39:14.267125 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb14290-2ce4-427b-b636-259f4c7a6dc3" containerName="cinder-api-log" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.267189 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb14290-2ce4-427b-b636-259f4c7a6dc3" 
containerName="cinder-api-log" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.267416 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c9800a6-c17a-482a-8f95-134b2df4afba" containerName="horizon" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.267481 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bb14290-2ce4-427b-b636-259f4c7a6dc3" containerName="cinder-api-log" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.267535 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="624cdacb-75c4-4a8b-86f8-8f0d451b6c6b" containerName="barbican-api" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.267601 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="78669beb-cdbe-41e0-8897-3bcf16dc9bdb" containerName="horizon" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.267660 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="624cdacb-75c4-4a8b-86f8-8f0d451b6c6b" containerName="barbican-api-log" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.267714 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="78669beb-cdbe-41e0-8897-3bcf16dc9bdb" containerName="horizon-log" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.267765 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="272c8937-1ebd-44f6-8514-030c1be0af24" containerName="dnsmasq-dns" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.267819 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c9800a6-c17a-482a-8f95-134b2df4afba" containerName="horizon-log" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.267875 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bb14290-2ce4-427b-b636-259f4c7a6dc3" containerName="cinder-api" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.269030 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.274289 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.277704 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxbvt\" (UniqueName: \"kubernetes.io/projected/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-kube-api-access-xxbvt\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.277794 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-config-data-custom\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.278552 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.278663 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.278739 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-config-data\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.278783 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.278826 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-scripts\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.279052 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.279096 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-logs\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.279169 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.280267 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.280681 4754 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.282791 4754 scope.go:117] "RemoveContainer" containerID="9dd347a1ce511272d3e335fab32f234b71131d807005f2523c44950f1d7afcd8" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.319739 4754 scope.go:117] "RemoveContainer" containerID="1f4103b72b0bff0efa28367232c952c5cf3387f56dd65ccc252f3347851f0d35" Feb 18 19:39:14 crc kubenswrapper[4754]: E0218 19:39:14.322091 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f4103b72b0bff0efa28367232c952c5cf3387f56dd65ccc252f3347851f0d35\": container with ID starting with 1f4103b72b0bff0efa28367232c952c5cf3387f56dd65ccc252f3347851f0d35 not found: ID does not exist" containerID="1f4103b72b0bff0efa28367232c952c5cf3387f56dd65ccc252f3347851f0d35" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.322133 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f4103b72b0bff0efa28367232c952c5cf3387f56dd65ccc252f3347851f0d35"} err="failed to get container status \"1f4103b72b0bff0efa28367232c952c5cf3387f56dd65ccc252f3347851f0d35\": rpc error: code = NotFound desc = could not find container \"1f4103b72b0bff0efa28367232c952c5cf3387f56dd65ccc252f3347851f0d35\": container with ID starting with 1f4103b72b0bff0efa28367232c952c5cf3387f56dd65ccc252f3347851f0d35 not found: ID does not exist" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.322181 4754 scope.go:117] "RemoveContainer" containerID="9dd347a1ce511272d3e335fab32f234b71131d807005f2523c44950f1d7afcd8" Feb 18 19:39:14 crc kubenswrapper[4754]: E0218 19:39:14.322706 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dd347a1ce511272d3e335fab32f234b71131d807005f2523c44950f1d7afcd8\": container with ID starting with 
9dd347a1ce511272d3e335fab32f234b71131d807005f2523c44950f1d7afcd8 not found: ID does not exist" containerID="9dd347a1ce511272d3e335fab32f234b71131d807005f2523c44950f1d7afcd8" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.322740 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dd347a1ce511272d3e335fab32f234b71131d807005f2523c44950f1d7afcd8"} err="failed to get container status \"9dd347a1ce511272d3e335fab32f234b71131d807005f2523c44950f1d7afcd8\": rpc error: code = NotFound desc = could not find container \"9dd347a1ce511272d3e335fab32f234b71131d807005f2523c44950f1d7afcd8\": container with ID starting with 9dd347a1ce511272d3e335fab32f234b71131d807005f2523c44950f1d7afcd8 not found: ID does not exist" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.384926 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-config-data\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.384984 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.385009 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-scripts\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.385111 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.385153 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-logs\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.385186 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.385210 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxbvt\" (UniqueName: \"kubernetes.io/projected/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-kube-api-access-xxbvt\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.385279 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-config-data-custom\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.385303 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc 
kubenswrapper[4754]: I0218 19:39:14.385427 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.388068 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-logs\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.390272 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.390426 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-scripts\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.390690 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.391452 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.392457 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-config-data-custom\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.404907 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxbvt\" (UniqueName: \"kubernetes.io/projected/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-kube-api-access-xxbvt\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.408133 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bdf4e84-8bf7-4087-95d3-9cb6beea369d-config-data\") pod \"cinder-api-0\" (UID: \"2bdf4e84-8bf7-4087-95d3-9cb6beea369d\") " pod="openstack/cinder-api-0" Feb 18 19:39:14 crc kubenswrapper[4754]: I0218 19:39:14.602500 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:39:15 crc kubenswrapper[4754]: I0218 19:39:15.094476 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:39:15 crc kubenswrapper[4754]: W0218 19:39:15.110254 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bdf4e84_8bf7_4087_95d3_9cb6beea369d.slice/crio-380a20f1dec4042caf1cddd62182ae326b4ed118301c2a88e4d4845701bb172d WatchSource:0}: Error finding container 380a20f1dec4042caf1cddd62182ae326b4ed118301c2a88e4d4845701bb172d: Status 404 returned error can't find the container with id 380a20f1dec4042caf1cddd62182ae326b4ed118301c2a88e4d4845701bb172d Feb 18 19:39:15 crc kubenswrapper[4754]: I0218 19:39:15.243952 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:39:15 crc kubenswrapper[4754]: I0218 19:39:15.306294 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 19:39:16 crc kubenswrapper[4754]: I0218 19:39:16.186162 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2bdf4e84-8bf7-4087-95d3-9cb6beea369d","Type":"ContainerStarted","Data":"ed6f111f0af9bbf9b4691bc6b41d2e15d9119dcc6dc977832222ff326323067a"} Feb 18 19:39:16 crc kubenswrapper[4754]: I0218 19:39:16.186594 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2bdf4e84-8bf7-4087-95d3-9cb6beea369d","Type":"ContainerStarted","Data":"380a20f1dec4042caf1cddd62182ae326b4ed118301c2a88e4d4845701bb172d"} Feb 18 19:39:16 crc kubenswrapper[4754]: I0218 19:39:16.224518 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bb14290-2ce4-427b-b636-259f4c7a6dc3" path="/var/lib/kubelet/pods/8bb14290-2ce4-427b-b636-259f4c7a6dc3/volumes" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 
19:39:17.017558 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.097600 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-config\") pod \"5d382047-c43a-4f82-8982-106e10d65430\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.097786 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-ovndb-tls-certs\") pod \"5d382047-c43a-4f82-8982-106e10d65430\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.097807 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-httpd-config\") pod \"5d382047-c43a-4f82-8982-106e10d65430\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.097874 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-public-tls-certs\") pod \"5d382047-c43a-4f82-8982-106e10d65430\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.097895 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-internal-tls-certs\") pod \"5d382047-c43a-4f82-8982-106e10d65430\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.097915 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-combined-ca-bundle\") pod \"5d382047-c43a-4f82-8982-106e10d65430\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.098066 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qfbw\" (UniqueName: \"kubernetes.io/projected/5d382047-c43a-4f82-8982-106e10d65430-kube-api-access-7qfbw\") pod \"5d382047-c43a-4f82-8982-106e10d65430\" (UID: \"5d382047-c43a-4f82-8982-106e10d65430\") " Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.121515 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5d382047-c43a-4f82-8982-106e10d65430" (UID: "5d382047-c43a-4f82-8982-106e10d65430"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.125521 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d382047-c43a-4f82-8982-106e10d65430-kube-api-access-7qfbw" (OuterVolumeSpecName: "kube-api-access-7qfbw") pod "5d382047-c43a-4f82-8982-106e10d65430" (UID: "5d382047-c43a-4f82-8982-106e10d65430"). InnerVolumeSpecName "kube-api-access-7qfbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.177301 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5d382047-c43a-4f82-8982-106e10d65430" (UID: "5d382047-c43a-4f82-8982-106e10d65430"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.183444 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d382047-c43a-4f82-8982-106e10d65430" (UID: "5d382047-c43a-4f82-8982-106e10d65430"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.188657 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5d382047-c43a-4f82-8982-106e10d65430" (UID: "5d382047-c43a-4f82-8982-106e10d65430"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.194329 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-config" (OuterVolumeSpecName: "config") pod "5d382047-c43a-4f82-8982-106e10d65430" (UID: "5d382047-c43a-4f82-8982-106e10d65430"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.200960 4754 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.200998 4754 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.201016 4754 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.201028 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.201040 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qfbw\" (UniqueName: \"kubernetes.io/projected/5d382047-c43a-4f82-8982-106e10d65430-kube-api-access-7qfbw\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.201051 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.201894 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2bdf4e84-8bf7-4087-95d3-9cb6beea369d","Type":"ContainerStarted","Data":"2f090ed6a00bf41969f07fb4d68f2514d7c50b00c76913cb882fca1a245071bf"} Feb 18 19:39:17 crc kubenswrapper[4754]: 
I0218 19:39:17.203179 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.205699 4754 generic.go:334] "Generic (PLEG): container finished" podID="5d382047-c43a-4f82-8982-106e10d65430" containerID="707a61dd7d4589603fc20973ee023db4d07115f81a8dcde258077d8dcad555ee" exitCode=0 Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.205749 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86445745d9-8xbmt" event={"ID":"5d382047-c43a-4f82-8982-106e10d65430","Type":"ContainerDied","Data":"707a61dd7d4589603fc20973ee023db4d07115f81a8dcde258077d8dcad555ee"} Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.205777 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86445745d9-8xbmt" event={"ID":"5d382047-c43a-4f82-8982-106e10d65430","Type":"ContainerDied","Data":"5b028b6d0b86ce82265e56b79099751123607adb39ac7173710f1e5b2560a815"} Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.205799 4754 scope.go:117] "RemoveContainer" containerID="57ff2fdc7943793e6161550cdedb085aa6389fd5e52e082c88c7fabdd0d7a213" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.205944 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86445745d9-8xbmt" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.259528 4754 scope.go:117] "RemoveContainer" containerID="707a61dd7d4589603fc20973ee023db4d07115f81a8dcde258077d8dcad555ee" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.265627 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5d382047-c43a-4f82-8982-106e10d65430" (UID: "5d382047-c43a-4f82-8982-106e10d65430"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.305518 4754 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d382047-c43a-4f82-8982-106e10d65430-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.305803 4754 scope.go:117] "RemoveContainer" containerID="57ff2fdc7943793e6161550cdedb085aa6389fd5e52e082c88c7fabdd0d7a213" Feb 18 19:39:17 crc kubenswrapper[4754]: E0218 19:39:17.306531 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57ff2fdc7943793e6161550cdedb085aa6389fd5e52e082c88c7fabdd0d7a213\": container with ID starting with 57ff2fdc7943793e6161550cdedb085aa6389fd5e52e082c88c7fabdd0d7a213 not found: ID does not exist" containerID="57ff2fdc7943793e6161550cdedb085aa6389fd5e52e082c88c7fabdd0d7a213" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.306605 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57ff2fdc7943793e6161550cdedb085aa6389fd5e52e082c88c7fabdd0d7a213"} err="failed to get container status \"57ff2fdc7943793e6161550cdedb085aa6389fd5e52e082c88c7fabdd0d7a213\": rpc error: code = NotFound desc = could not find container \"57ff2fdc7943793e6161550cdedb085aa6389fd5e52e082c88c7fabdd0d7a213\": container with ID starting with 57ff2fdc7943793e6161550cdedb085aa6389fd5e52e082c88c7fabdd0d7a213 not found: ID does not exist" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.306643 4754 scope.go:117] "RemoveContainer" containerID="707a61dd7d4589603fc20973ee023db4d07115f81a8dcde258077d8dcad555ee" Feb 18 19:39:17 crc kubenswrapper[4754]: E0218 19:39:17.307009 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"707a61dd7d4589603fc20973ee023db4d07115f81a8dcde258077d8dcad555ee\": 
container with ID starting with 707a61dd7d4589603fc20973ee023db4d07115f81a8dcde258077d8dcad555ee not found: ID does not exist" containerID="707a61dd7d4589603fc20973ee023db4d07115f81a8dcde258077d8dcad555ee" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.307045 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"707a61dd7d4589603fc20973ee023db4d07115f81a8dcde258077d8dcad555ee"} err="failed to get container status \"707a61dd7d4589603fc20973ee023db4d07115f81a8dcde258077d8dcad555ee\": rpc error: code = NotFound desc = could not find container \"707a61dd7d4589603fc20973ee023db4d07115f81a8dcde258077d8dcad555ee\": container with ID starting with 707a61dd7d4589603fc20973ee023db4d07115f81a8dcde258077d8dcad555ee not found: ID does not exist" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.378571 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.411282 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.411256422 podStartE2EDuration="3.411256422s" podCreationTimestamp="2026-02-18 19:39:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:17.260126004 +0000 UTC m=+1259.710538810" watchObservedRunningTime="2026-02-18 19:39:17.411256422 +0000 UTC m=+1259.861669218" Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.541987 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86445745d9-8xbmt"] Feb 18 19:39:17 crc kubenswrapper[4754]: I0218 19:39:17.555760 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-86445745d9-8xbmt"] Feb 18 19:39:18 crc kubenswrapper[4754]: I0218 19:39:18.248923 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5d382047-c43a-4f82-8982-106e10d65430" path="/var/lib/kubelet/pods/5d382047-c43a-4f82-8982-106e10d65430/volumes" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.114123 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.167501 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdrtv\" (UniqueName: \"kubernetes.io/projected/742e0717-1560-424d-b0d3-4e7b46f8ec8c-kube-api-access-bdrtv\") pod \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.167565 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/742e0717-1560-424d-b0d3-4e7b46f8ec8c-run-httpd\") pod \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.167618 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-config-data\") pod \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.167665 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-scripts\") pod \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.167705 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-combined-ca-bundle\") pod \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\" 
(UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.167724 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-sg-core-conf-yaml\") pod \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.167743 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/742e0717-1560-424d-b0d3-4e7b46f8ec8c-log-httpd\") pod \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\" (UID: \"742e0717-1560-424d-b0d3-4e7b46f8ec8c\") " Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.168279 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/742e0717-1560-424d-b0d3-4e7b46f8ec8c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "742e0717-1560-424d-b0d3-4e7b46f8ec8c" (UID: "742e0717-1560-424d-b0d3-4e7b46f8ec8c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.168878 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/742e0717-1560-424d-b0d3-4e7b46f8ec8c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "742e0717-1560-424d-b0d3-4e7b46f8ec8c" (UID: "742e0717-1560-424d-b0d3-4e7b46f8ec8c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.191762 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-scripts" (OuterVolumeSpecName: "scripts") pod "742e0717-1560-424d-b0d3-4e7b46f8ec8c" (UID: "742e0717-1560-424d-b0d3-4e7b46f8ec8c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.192714 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/742e0717-1560-424d-b0d3-4e7b46f8ec8c-kube-api-access-bdrtv" (OuterVolumeSpecName: "kube-api-access-bdrtv") pod "742e0717-1560-424d-b0d3-4e7b46f8ec8c" (UID: "742e0717-1560-424d-b0d3-4e7b46f8ec8c"). InnerVolumeSpecName "kube-api-access-bdrtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.235243 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "742e0717-1560-424d-b0d3-4e7b46f8ec8c" (UID: "742e0717-1560-424d-b0d3-4e7b46f8ec8c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.294861 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdrtv\" (UniqueName: \"kubernetes.io/projected/742e0717-1560-424d-b0d3-4e7b46f8ec8c-kube-api-access-bdrtv\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.294893 4754 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/742e0717-1560-424d-b0d3-4e7b46f8ec8c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.294903 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.294914 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.294925 4754 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/742e0717-1560-424d-b0d3-4e7b46f8ec8c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.299376 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "742e0717-1560-424d-b0d3-4e7b46f8ec8c" (UID: "742e0717-1560-424d-b0d3-4e7b46f8ec8c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.301036 4754 generic.go:334] "Generic (PLEG): container finished" podID="742e0717-1560-424d-b0d3-4e7b46f8ec8c" containerID="88b97ccf66a7afe2041000e54869e0c49270685ba81a68eb30cc7c638a205f23" exitCode=0 Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.301113 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"742e0717-1560-424d-b0d3-4e7b46f8ec8c","Type":"ContainerDied","Data":"88b97ccf66a7afe2041000e54869e0c49270685ba81a68eb30cc7c638a205f23"} Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.301180 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"742e0717-1560-424d-b0d3-4e7b46f8ec8c","Type":"ContainerDied","Data":"3cc87e443e04924f45dd38ce82d4dbd762fe70942a83e280aaddf9e17de56785"} Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.301209 4754 scope.go:117] "RemoveContainer" containerID="11d141f0bac541db92edc6c262a59d160bc1dba4ddd0b667fbe30a91d42f0df2" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.301407 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.327349 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-config-data" (OuterVolumeSpecName: "config-data") pod "742e0717-1560-424d-b0d3-4e7b46f8ec8c" (UID: "742e0717-1560-424d-b0d3-4e7b46f8ec8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.341599 4754 scope.go:117] "RemoveContainer" containerID="88b97ccf66a7afe2041000e54869e0c49270685ba81a68eb30cc7c638a205f23" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.384585 4754 scope.go:117] "RemoveContainer" containerID="11d141f0bac541db92edc6c262a59d160bc1dba4ddd0b667fbe30a91d42f0df2" Feb 18 19:39:20 crc kubenswrapper[4754]: E0218 19:39:20.387721 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11d141f0bac541db92edc6c262a59d160bc1dba4ddd0b667fbe30a91d42f0df2\": container with ID starting with 11d141f0bac541db92edc6c262a59d160bc1dba4ddd0b667fbe30a91d42f0df2 not found: ID does not exist" containerID="11d141f0bac541db92edc6c262a59d160bc1dba4ddd0b667fbe30a91d42f0df2" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.387776 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11d141f0bac541db92edc6c262a59d160bc1dba4ddd0b667fbe30a91d42f0df2"} err="failed to get container status \"11d141f0bac541db92edc6c262a59d160bc1dba4ddd0b667fbe30a91d42f0df2\": rpc error: code = NotFound desc = could not find container \"11d141f0bac541db92edc6c262a59d160bc1dba4ddd0b667fbe30a91d42f0df2\": container with ID starting with 11d141f0bac541db92edc6c262a59d160bc1dba4ddd0b667fbe30a91d42f0df2 not found: ID does not exist" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.387808 4754 scope.go:117] "RemoveContainer" 
containerID="88b97ccf66a7afe2041000e54869e0c49270685ba81a68eb30cc7c638a205f23" Feb 18 19:39:20 crc kubenswrapper[4754]: E0218 19:39:20.388644 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88b97ccf66a7afe2041000e54869e0c49270685ba81a68eb30cc7c638a205f23\": container with ID starting with 88b97ccf66a7afe2041000e54869e0c49270685ba81a68eb30cc7c638a205f23 not found: ID does not exist" containerID="88b97ccf66a7afe2041000e54869e0c49270685ba81a68eb30cc7c638a205f23" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.388727 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b97ccf66a7afe2041000e54869e0c49270685ba81a68eb30cc7c638a205f23"} err="failed to get container status \"88b97ccf66a7afe2041000e54869e0c49270685ba81a68eb30cc7c638a205f23\": rpc error: code = NotFound desc = could not find container \"88b97ccf66a7afe2041000e54869e0c49270685ba81a68eb30cc7c638a205f23\": container with ID starting with 88b97ccf66a7afe2041000e54869e0c49270685ba81a68eb30cc7c638a205f23 not found: ID does not exist" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.397039 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.397088 4754 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/742e0717-1560-424d-b0d3-4e7b46f8ec8c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.472293 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.560813 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-848cf88cfc-6bd9t"] Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.561129 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" podUID="bb038046-b50b-427e-8c6e-8106009fea7d" containerName="dnsmasq-dns" containerID="cri-o://7b3057f07dfc37f8114e546b1215f6506765a7cfd37e3ebdd345974f4b685507" gracePeriod=10 Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.678556 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.752923 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.895741 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.920285 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.938552 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:20 crc kubenswrapper[4754]: E0218 19:39:20.943749 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742e0717-1560-424d-b0d3-4e7b46f8ec8c" containerName="sg-core" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.943792 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="742e0717-1560-424d-b0d3-4e7b46f8ec8c" containerName="sg-core" Feb 18 19:39:20 crc kubenswrapper[4754]: E0218 19:39:20.943815 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d382047-c43a-4f82-8982-106e10d65430" containerName="neutron-httpd" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.943825 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d382047-c43a-4f82-8982-106e10d65430" containerName="neutron-httpd" Feb 18 19:39:20 crc kubenswrapper[4754]: E0218 
19:39:20.943845 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742e0717-1560-424d-b0d3-4e7b46f8ec8c" containerName="ceilometer-notification-agent" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.943853 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="742e0717-1560-424d-b0d3-4e7b46f8ec8c" containerName="ceilometer-notification-agent" Feb 18 19:39:20 crc kubenswrapper[4754]: E0218 19:39:20.943872 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d382047-c43a-4f82-8982-106e10d65430" containerName="neutron-api" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.943881 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d382047-c43a-4f82-8982-106e10d65430" containerName="neutron-api" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.944131 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d382047-c43a-4f82-8982-106e10d65430" containerName="neutron-api" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.944203 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="742e0717-1560-424d-b0d3-4e7b46f8ec8c" containerName="sg-core" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.944219 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d382047-c43a-4f82-8982-106e10d65430" containerName="neutron-httpd" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.944232 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="742e0717-1560-424d-b0d3-4e7b46f8ec8c" containerName="ceilometer-notification-agent" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.946990 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.949784 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.960950 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:39:20 crc kubenswrapper[4754]: I0218 19:39:20.962354 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.045692 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51d3b1e7-508a-4e64-b50d-fd7b03727405-run-httpd\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0" Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.045823 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51d3b1e7-508a-4e64-b50d-fd7b03727405-log-httpd\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0" Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.045855 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0" Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.045938 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-scripts\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " 
pod="openstack/ceilometer-0" Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.045962 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0" Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.045994 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckfmd\" (UniqueName: \"kubernetes.io/projected/51d3b1e7-508a-4e64-b50d-fd7b03727405-kube-api-access-ckfmd\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0" Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.046014 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-config-data\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0" Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.151707 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51d3b1e7-508a-4e64-b50d-fd7b03727405-log-httpd\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0" Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.151762 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0" Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.151821 4754 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-scripts\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0"
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.151844 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0"
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.151866 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckfmd\" (UniqueName: \"kubernetes.io/projected/51d3b1e7-508a-4e64-b50d-fd7b03727405-kube-api-access-ckfmd\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0"
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.151884 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-config-data\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0"
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.151925 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51d3b1e7-508a-4e64-b50d-fd7b03727405-run-httpd\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0"
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.152447 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51d3b1e7-508a-4e64-b50d-fd7b03727405-run-httpd\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0"
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.158405 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51d3b1e7-508a-4e64-b50d-fd7b03727405-log-httpd\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0"
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.159199 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0"
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.163684 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0"
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.174058 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-scripts\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0"
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.187420 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckfmd\" (UniqueName: \"kubernetes.io/projected/51d3b1e7-508a-4e64-b50d-fd7b03727405-kube-api-access-ckfmd\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0"
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.187910 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-config-data\") pod \"ceilometer-0\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " pod="openstack/ceilometer-0"
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.309645 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.327629 4754 generic.go:334] "Generic (PLEG): container finished" podID="bb038046-b50b-427e-8c6e-8106009fea7d" containerID="7b3057f07dfc37f8114e546b1215f6506765a7cfd37e3ebdd345974f4b685507" exitCode=0
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.327947 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7ed03dcf-4bbf-441d-b17f-f534b9640183" containerName="cinder-scheduler" containerID="cri-o://3a657c825d040812930b7076694ec65a76741e210aa94d9a4f3104dee648bfc7" gracePeriod=30
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.328244 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" event={"ID":"bb038046-b50b-427e-8c6e-8106009fea7d","Type":"ContainerDied","Data":"7b3057f07dfc37f8114e546b1215f6506765a7cfd37e3ebdd345974f4b685507"}
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.328752 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7ed03dcf-4bbf-441d-b17f-f534b9640183" containerName="probe" containerID="cri-o://db53c1e00e2cb5fc8d97bf979804881e4f3518abc71b706585a7fa70d86fbafb" gracePeriod=30
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.378869 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8669779bb4-bghj5"
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.436231 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t"
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.460915 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-ovsdbserver-nb\") pod \"bb038046-b50b-427e-8c6e-8106009fea7d\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") "
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.461005 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-dns-svc\") pod \"bb038046-b50b-427e-8c6e-8106009fea7d\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") "
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.461022 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-config\") pod \"bb038046-b50b-427e-8c6e-8106009fea7d\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") "
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.461038 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-ovsdbserver-sb\") pod \"bb038046-b50b-427e-8c6e-8106009fea7d\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") "
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.461128 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-dns-swift-storage-0\") pod \"bb038046-b50b-427e-8c6e-8106009fea7d\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") "
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.461693 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf79q\" (UniqueName: \"kubernetes.io/projected/bb038046-b50b-427e-8c6e-8106009fea7d-kube-api-access-gf79q\") pod \"bb038046-b50b-427e-8c6e-8106009fea7d\" (UID: \"bb038046-b50b-427e-8c6e-8106009fea7d\") "
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.491939 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb038046-b50b-427e-8c6e-8106009fea7d-kube-api-access-gf79q" (OuterVolumeSpecName: "kube-api-access-gf79q") pod "bb038046-b50b-427e-8c6e-8106009fea7d" (UID: "bb038046-b50b-427e-8c6e-8106009fea7d"). InnerVolumeSpecName "kube-api-access-gf79q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.498789 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-cfd5fbcfb-z278z"
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.558725 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bb038046-b50b-427e-8c6e-8106009fea7d" (UID: "bb038046-b50b-427e-8c6e-8106009fea7d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.565854 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.565889 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf79q\" (UniqueName: \"kubernetes.io/projected/bb038046-b50b-427e-8c6e-8106009fea7d-kube-api-access-gf79q\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.581464 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bb038046-b50b-427e-8c6e-8106009fea7d" (UID: "bb038046-b50b-427e-8c6e-8106009fea7d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.586786 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-config" (OuterVolumeSpecName: "config") pod "bb038046-b50b-427e-8c6e-8106009fea7d" (UID: "bb038046-b50b-427e-8c6e-8106009fea7d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.602832 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bb038046-b50b-427e-8c6e-8106009fea7d" (UID: "bb038046-b50b-427e-8c6e-8106009fea7d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.615888 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bb038046-b50b-427e-8c6e-8106009fea7d" (UID: "bb038046-b50b-427e-8c6e-8106009fea7d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.619924 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8669779bb4-bghj5"
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.685205 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.685242 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.686132 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.686159 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb038046-b50b-427e-8c6e-8106009fea7d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:21 crc kubenswrapper[4754]: I0218 19:39:21.769073 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:39:22 crc kubenswrapper[4754]: I0218 19:39:22.232711 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="742e0717-1560-424d-b0d3-4e7b46f8ec8c" path="/var/lib/kubelet/pods/742e0717-1560-424d-b0d3-4e7b46f8ec8c/volumes"
Feb 18 19:39:22 crc kubenswrapper[4754]: I0218 19:39:22.344694 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t" event={"ID":"bb038046-b50b-427e-8c6e-8106009fea7d","Type":"ContainerDied","Data":"090b0d71e9c85ba5fba5342cdece9b5a083abd580161bd04dce77b6c0f8aa57d"}
Feb 18 19:39:22 crc kubenswrapper[4754]: I0218 19:39:22.344751 4754 scope.go:117] "RemoveContainer" containerID="7b3057f07dfc37f8114e546b1215f6506765a7cfd37e3ebdd345974f4b685507"
Feb 18 19:39:22 crc kubenswrapper[4754]: I0218 19:39:22.344940 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-6bd9t"
Feb 18 19:39:22 crc kubenswrapper[4754]: I0218 19:39:22.353421 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51d3b1e7-508a-4e64-b50d-fd7b03727405","Type":"ContainerStarted","Data":"c530e9f889dd33edf611e7ea64e9905c88f5cf1edac68fdd3fc13bf3195c1807"}
Feb 18 19:39:22 crc kubenswrapper[4754]: I0218 19:39:22.356700 4754 generic.go:334] "Generic (PLEG): container finished" podID="7ed03dcf-4bbf-441d-b17f-f534b9640183" containerID="db53c1e00e2cb5fc8d97bf979804881e4f3518abc71b706585a7fa70d86fbafb" exitCode=0
Feb 18 19:39:22 crc kubenswrapper[4754]: I0218 19:39:22.356953 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7ed03dcf-4bbf-441d-b17f-f534b9640183","Type":"ContainerDied","Data":"db53c1e00e2cb5fc8d97bf979804881e4f3518abc71b706585a7fa70d86fbafb"}
Feb 18 19:39:22 crc kubenswrapper[4754]: I0218 19:39:22.380608 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6bd9t"]
Feb 18 19:39:22 crc kubenswrapper[4754]: I0218 19:39:22.383914 4754 scope.go:117] "RemoveContainer" containerID="c0cd4e7f2ac47f206a8a3eb9a8c38372f30ab69d4a287cd1bfa526fa4c5cb9e9"
Feb 18 19:39:22 crc kubenswrapper[4754]: I0218 19:39:22.389466 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6bd9t"]
Feb 18 19:39:22 crc kubenswrapper[4754]: I0218 19:39:22.396736 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-cfd5fbcfb-z278z"
Feb 18 19:39:22 crc kubenswrapper[4754]: I0218 19:39:22.468896 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8669779bb4-bghj5"]
Feb 18 19:39:22 crc kubenswrapper[4754]: I0218 19:39:22.491859 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-644bd8cdcb-r8qtx"
Feb 18 19:39:23 crc kubenswrapper[4754]: I0218 19:39:23.367851 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51d3b1e7-508a-4e64-b50d-fd7b03727405","Type":"ContainerStarted","Data":"45221b88190f28d3c52b8fe1b5fa6f857524acb5eb639f56c1464f21d56344d1"}
Feb 18 19:39:23 crc kubenswrapper[4754]: I0218 19:39:23.368195 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51d3b1e7-508a-4e64-b50d-fd7b03727405","Type":"ContainerStarted","Data":"25129fe57081e00216e56156b475e491b39dc2424349bd14215ea85c3daf662f"}
Feb 18 19:39:23 crc kubenswrapper[4754]: I0218 19:39:23.369349 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-8669779bb4-bghj5" podUID="522d7b0a-243a-4469-b9f1-d6d838827080" containerName="placement-log" containerID="cri-o://30740e6b62f23104634c587affcd7f1907d60cd41503653455c75e86339bba83" gracePeriod=30
Feb 18 19:39:23 crc kubenswrapper[4754]: I0218 19:39:23.369377 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-8669779bb4-bghj5" podUID="522d7b0a-243a-4469-b9f1-d6d838827080" containerName="placement-api" containerID="cri-o://76ecba34c4bb8be5d5369b3b679f32ad1fb79f3b61666e44d71cfe6f1518ae78" gracePeriod=30
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.232298 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb038046-b50b-427e-8c6e-8106009fea7d" path="/var/lib/kubelet/pods/bb038046-b50b-427e-8c6e-8106009fea7d/volumes"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.290026 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.342606 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qzrd\" (UniqueName: \"kubernetes.io/projected/7ed03dcf-4bbf-441d-b17f-f534b9640183-kube-api-access-4qzrd\") pod \"7ed03dcf-4bbf-441d-b17f-f534b9640183\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") "
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.343082 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7ed03dcf-4bbf-441d-b17f-f534b9640183-etc-machine-id\") pod \"7ed03dcf-4bbf-441d-b17f-f534b9640183\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") "
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.343224 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-config-data-custom\") pod \"7ed03dcf-4bbf-441d-b17f-f534b9640183\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") "
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.343227 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ed03dcf-4bbf-441d-b17f-f534b9640183-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7ed03dcf-4bbf-441d-b17f-f534b9640183" (UID: "7ed03dcf-4bbf-441d-b17f-f534b9640183"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.343254 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-combined-ca-bundle\") pod \"7ed03dcf-4bbf-441d-b17f-f534b9640183\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") "
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.343312 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-scripts\") pod \"7ed03dcf-4bbf-441d-b17f-f534b9640183\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") "
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.343891 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-config-data\") pod \"7ed03dcf-4bbf-441d-b17f-f534b9640183\" (UID: \"7ed03dcf-4bbf-441d-b17f-f534b9640183\") "
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.344391 4754 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7ed03dcf-4bbf-441d-b17f-f534b9640183-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.350488 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-scripts" (OuterVolumeSpecName: "scripts") pod "7ed03dcf-4bbf-441d-b17f-f534b9640183" (UID: "7ed03dcf-4bbf-441d-b17f-f534b9640183"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.358422 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ed03dcf-4bbf-441d-b17f-f534b9640183-kube-api-access-4qzrd" (OuterVolumeSpecName: "kube-api-access-4qzrd") pod "7ed03dcf-4bbf-441d-b17f-f534b9640183" (UID: "7ed03dcf-4bbf-441d-b17f-f534b9640183"). InnerVolumeSpecName "kube-api-access-4qzrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.362327 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7ed03dcf-4bbf-441d-b17f-f534b9640183" (UID: "7ed03dcf-4bbf-441d-b17f-f534b9640183"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.389699 4754 generic.go:334] "Generic (PLEG): container finished" podID="522d7b0a-243a-4469-b9f1-d6d838827080" containerID="30740e6b62f23104634c587affcd7f1907d60cd41503653455c75e86339bba83" exitCode=143
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.389795 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8669779bb4-bghj5" event={"ID":"522d7b0a-243a-4469-b9f1-d6d838827080","Type":"ContainerDied","Data":"30740e6b62f23104634c587affcd7f1907d60cd41503653455c75e86339bba83"}
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.401699 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51d3b1e7-508a-4e64-b50d-fd7b03727405","Type":"ContainerStarted","Data":"b71252f9f0ef4180c959f903f18de31a93551b93921d24302d1e2c7c298ddd16"}
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.403767 4754 generic.go:334] "Generic (PLEG): container finished" podID="7ed03dcf-4bbf-441d-b17f-f534b9640183" containerID="3a657c825d040812930b7076694ec65a76741e210aa94d9a4f3104dee648bfc7" exitCode=0
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.403797 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7ed03dcf-4bbf-441d-b17f-f534b9640183","Type":"ContainerDied","Data":"3a657c825d040812930b7076694ec65a76741e210aa94d9a4f3104dee648bfc7"}
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.403828 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7ed03dcf-4bbf-441d-b17f-f534b9640183","Type":"ContainerDied","Data":"9adea094fd785fd192f6299fb51dc0c629b2801a1c19fb45cc73d1ff6b1e3b05"}
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.403848 4754 scope.go:117] "RemoveContainer" containerID="db53c1e00e2cb5fc8d97bf979804881e4f3518abc71b706585a7fa70d86fbafb"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.403949 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.427252 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ed03dcf-4bbf-441d-b17f-f534b9640183" (UID: "7ed03dcf-4bbf-441d-b17f-f534b9640183"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.446317 4754 scope.go:117] "RemoveContainer" containerID="3a657c825d040812930b7076694ec65a76741e210aa94d9a4f3104dee648bfc7"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.446789 4754 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.446825 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.446852 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.446866 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qzrd\" (UniqueName: \"kubernetes.io/projected/7ed03dcf-4bbf-441d-b17f-f534b9640183-kube-api-access-4qzrd\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.461032 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-config-data" (OuterVolumeSpecName: "config-data") pod "7ed03dcf-4bbf-441d-b17f-f534b9640183" (UID: "7ed03dcf-4bbf-441d-b17f-f534b9640183"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.479584 4754 scope.go:117] "RemoveContainer" containerID="db53c1e00e2cb5fc8d97bf979804881e4f3518abc71b706585a7fa70d86fbafb"
Feb 18 19:39:24 crc kubenswrapper[4754]: E0218 19:39:24.480227 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db53c1e00e2cb5fc8d97bf979804881e4f3518abc71b706585a7fa70d86fbafb\": container with ID starting with db53c1e00e2cb5fc8d97bf979804881e4f3518abc71b706585a7fa70d86fbafb not found: ID does not exist" containerID="db53c1e00e2cb5fc8d97bf979804881e4f3518abc71b706585a7fa70d86fbafb"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.480276 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db53c1e00e2cb5fc8d97bf979804881e4f3518abc71b706585a7fa70d86fbafb"} err="failed to get container status \"db53c1e00e2cb5fc8d97bf979804881e4f3518abc71b706585a7fa70d86fbafb\": rpc error: code = NotFound desc = could not find container \"db53c1e00e2cb5fc8d97bf979804881e4f3518abc71b706585a7fa70d86fbafb\": container with ID starting with db53c1e00e2cb5fc8d97bf979804881e4f3518abc71b706585a7fa70d86fbafb not found: ID does not exist"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.480306 4754 scope.go:117] "RemoveContainer" containerID="3a657c825d040812930b7076694ec65a76741e210aa94d9a4f3104dee648bfc7"
Feb 18 19:39:24 crc kubenswrapper[4754]: E0218 19:39:24.480849 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a657c825d040812930b7076694ec65a76741e210aa94d9a4f3104dee648bfc7\": container with ID starting with 3a657c825d040812930b7076694ec65a76741e210aa94d9a4f3104dee648bfc7 not found: ID does not exist" containerID="3a657c825d040812930b7076694ec65a76741e210aa94d9a4f3104dee648bfc7"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.480910 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a657c825d040812930b7076694ec65a76741e210aa94d9a4f3104dee648bfc7"} err="failed to get container status \"3a657c825d040812930b7076694ec65a76741e210aa94d9a4f3104dee648bfc7\": rpc error: code = NotFound desc = could not find container \"3a657c825d040812930b7076694ec65a76741e210aa94d9a4f3104dee648bfc7\": container with ID starting with 3a657c825d040812930b7076694ec65a76741e210aa94d9a4f3104dee648bfc7 not found: ID does not exist"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.549844 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed03dcf-4bbf-441d-b17f-f534b9640183-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.757500 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.778782 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.788860 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 19:39:24 crc kubenswrapper[4754]: E0218 19:39:24.789673 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ed03dcf-4bbf-441d-b17f-f534b9640183" containerName="probe"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.789797 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ed03dcf-4bbf-441d-b17f-f534b9640183" containerName="probe"
Feb 18 19:39:24 crc kubenswrapper[4754]: E0218 19:39:24.789865 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb038046-b50b-427e-8c6e-8106009fea7d" containerName="init"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.789917 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb038046-b50b-427e-8c6e-8106009fea7d" containerName="init"
Feb 18 19:39:24 crc kubenswrapper[4754]: E0218 19:39:24.790006 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb038046-b50b-427e-8c6e-8106009fea7d" containerName="dnsmasq-dns"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.790073 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb038046-b50b-427e-8c6e-8106009fea7d" containerName="dnsmasq-dns"
Feb 18 19:39:24 crc kubenswrapper[4754]: E0218 19:39:24.790132 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ed03dcf-4bbf-441d-b17f-f534b9640183" containerName="cinder-scheduler"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.790219 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ed03dcf-4bbf-441d-b17f-f534b9640183" containerName="cinder-scheduler"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.790507 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb038046-b50b-427e-8c6e-8106009fea7d" containerName="dnsmasq-dns"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.790600 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ed03dcf-4bbf-441d-b17f-f534b9640183" containerName="probe"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.790708 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ed03dcf-4bbf-441d-b17f-f534b9640183" containerName="cinder-scheduler"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.791985 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.844488 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.873127 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqjpz\" (UniqueName: \"kubernetes.io/projected/9d0da334-a8b7-4c9f-94fd-5222d4d192a4-kube-api-access-dqjpz\") pod \"cinder-scheduler-0\" (UID: \"9d0da334-a8b7-4c9f-94fd-5222d4d192a4\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.874476 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d0da334-a8b7-4c9f-94fd-5222d4d192a4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9d0da334-a8b7-4c9f-94fd-5222d4d192a4\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.874584 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d0da334-a8b7-4c9f-94fd-5222d4d192a4-scripts\") pod \"cinder-scheduler-0\" (UID: \"9d0da334-a8b7-4c9f-94fd-5222d4d192a4\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.874860 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d0da334-a8b7-4c9f-94fd-5222d4d192a4-config-data\") pod \"cinder-scheduler-0\" (UID: \"9d0da334-a8b7-4c9f-94fd-5222d4d192a4\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.874944 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d0da334-a8b7-4c9f-94fd-5222d4d192a4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9d0da334-a8b7-4c9f-94fd-5222d4d192a4\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.874973 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d0da334-a8b7-4c9f-94fd-5222d4d192a4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9d0da334-a8b7-4c9f-94fd-5222d4d192a4\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.886400 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.976569 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d0da334-a8b7-4c9f-94fd-5222d4d192a4-config-data\") pod \"cinder-scheduler-0\" (UID: \"9d0da334-a8b7-4c9f-94fd-5222d4d192a4\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.976642 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d0da334-a8b7-4c9f-94fd-5222d4d192a4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9d0da334-a8b7-4c9f-94fd-5222d4d192a4\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.976665 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d0da334-a8b7-4c9f-94fd-5222d4d192a4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9d0da334-a8b7-4c9f-94fd-5222d4d192a4\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.976716 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqjpz\" (UniqueName: \"kubernetes.io/projected/9d0da334-a8b7-4c9f-94fd-5222d4d192a4-kube-api-access-dqjpz\") pod \"cinder-scheduler-0\" (UID: \"9d0da334-a8b7-4c9f-94fd-5222d4d192a4\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.976778 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d0da334-a8b7-4c9f-94fd-5222d4d192a4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9d0da334-a8b7-4c9f-94fd-5222d4d192a4\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.976804 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d0da334-a8b7-4c9f-94fd-5222d4d192a4-scripts\") pod \"cinder-scheduler-0\" (UID: \"9d0da334-a8b7-4c9f-94fd-5222d4d192a4\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.977487 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9d0da334-a8b7-4c9f-94fd-5222d4d192a4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9d0da334-a8b7-4c9f-94fd-5222d4d192a4\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.980948 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d0da334-a8b7-4c9f-94fd-5222d4d192a4-scripts\") pod \"cinder-scheduler-0\" (UID: \"9d0da334-a8b7-4c9f-94fd-5222d4d192a4\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.981971 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d0da334-a8b7-4c9f-94fd-5222d4d192a4-config-data\") pod \"cinder-scheduler-0\" (UID: \"9d0da334-a8b7-4c9f-94fd-5222d4d192a4\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.983707 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d0da334-a8b7-4c9f-94fd-5222d4d192a4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9d0da334-a8b7-4c9f-94fd-5222d4d192a4\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:39:24 crc kubenswrapper[4754]: I0218 19:39:24.998954 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9d0da334-a8b7-4c9f-94fd-5222d4d192a4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9d0da334-a8b7-4c9f-94fd-5222d4d192a4\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.006297 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqjpz\" (UniqueName: \"kubernetes.io/projected/9d0da334-a8b7-4c9f-94fd-5222d4d192a4-kube-api-access-dqjpz\") pod \"cinder-scheduler-0\" (UID: \"9d0da334-a8b7-4c9f-94fd-5222d4d192a4\") " pod="openstack/cinder-scheduler-0"
Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.017007 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.018389 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.026414 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.027397 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-7jrs9"
Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.027430 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.035123 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.079381 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-openstack-config\") pod \"openstackclient\" (UID: \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\") " pod="openstack/openstackclient"
Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.084037 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\") " pod="openstack/openstackclient"
Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.117664 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8596\" (UniqueName: \"kubernetes.io/projected/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-kube-api-access-g8596\") pod \"openstackclient\" (UID: \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\") " pod="openstack/openstackclient"
Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.117774 4754 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-openstack-config-secret\") pod \"openstackclient\" (UID: \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.177570 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.219901 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-openstack-config\") pod \"openstackclient\" (UID: \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.221189 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.221315 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8596\" (UniqueName: \"kubernetes.io/projected/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-kube-api-access-g8596\") pod \"openstackclient\" (UID: \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.221356 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-openstack-config-secret\") pod \"openstackclient\" (UID: \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc 
kubenswrapper[4754]: I0218 19:39:25.221081 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-openstack-config\") pod \"openstackclient\" (UID: \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.229827 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-openstack-config-secret\") pod \"openstackclient\" (UID: \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.234110 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.279937 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8596\" (UniqueName: \"kubernetes.io/projected/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-kube-api-access-g8596\") pod \"openstackclient\" (UID: \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.408603 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.450455 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.460081 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.530178 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.555708 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.555829 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.665554 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6653ae5b-194f-49b8-ad3c-0ee3b1612f64-openstack-config\") pod \"openstackclient\" (UID: \"6653ae5b-194f-49b8-ad3c-0ee3b1612f64\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.665858 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jlnq\" (UniqueName: \"kubernetes.io/projected/6653ae5b-194f-49b8-ad3c-0ee3b1612f64-kube-api-access-4jlnq\") pod \"openstackclient\" (UID: \"6653ae5b-194f-49b8-ad3c-0ee3b1612f64\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.665909 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6653ae5b-194f-49b8-ad3c-0ee3b1612f64-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6653ae5b-194f-49b8-ad3c-0ee3b1612f64\") " pod="openstack/openstackclient" 
Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.665995 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6653ae5b-194f-49b8-ad3c-0ee3b1612f64-openstack-config-secret\") pod \"openstackclient\" (UID: \"6653ae5b-194f-49b8-ad3c-0ee3b1612f64\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: E0218 19:39:25.757402 4754 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 18 19:39:25 crc kubenswrapper[4754]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7_0(a66603eb31ef266ccd793928c1cd137fd04bad2ea328f7fbbd18ef046403019e): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a66603eb31ef266ccd793928c1cd137fd04bad2ea328f7fbbd18ef046403019e" Netns:"/var/run/netns/2eff1b05-8141-4d74-ae40-368cd2b42bfd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=a66603eb31ef266ccd793928c1cd137fd04bad2ea328f7fbbd18ef046403019e;K8S_POD_UID=8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7]: expected pod UID "8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7" but got "6653ae5b-194f-49b8-ad3c-0ee3b1612f64" from Kube API Feb 18 19:39:25 crc kubenswrapper[4754]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 18 19:39:25 crc kubenswrapper[4754]: > Feb 18 19:39:25 crc kubenswrapper[4754]: E0218 19:39:25.757508 4754 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 18 19:39:25 crc kubenswrapper[4754]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7_0(a66603eb31ef266ccd793928c1cd137fd04bad2ea328f7fbbd18ef046403019e): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a66603eb31ef266ccd793928c1cd137fd04bad2ea328f7fbbd18ef046403019e" Netns:"/var/run/netns/2eff1b05-8141-4d74-ae40-368cd2b42bfd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=a66603eb31ef266ccd793928c1cd137fd04bad2ea328f7fbbd18ef046403019e;K8S_POD_UID=8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7]: expected pod UID "8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7" but got "6653ae5b-194f-49b8-ad3c-0ee3b1612f64" from Kube API Feb 18 19:39:25 crc kubenswrapper[4754]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 18 19:39:25 crc kubenswrapper[4754]: > pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.767726 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jlnq\" (UniqueName: \"kubernetes.io/projected/6653ae5b-194f-49b8-ad3c-0ee3b1612f64-kube-api-access-4jlnq\") pod \"openstackclient\" (UID: \"6653ae5b-194f-49b8-ad3c-0ee3b1612f64\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.767807 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6653ae5b-194f-49b8-ad3c-0ee3b1612f64-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6653ae5b-194f-49b8-ad3c-0ee3b1612f64\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.767869 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6653ae5b-194f-49b8-ad3c-0ee3b1612f64-openstack-config-secret\") pod \"openstackclient\" (UID: \"6653ae5b-194f-49b8-ad3c-0ee3b1612f64\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.767993 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6653ae5b-194f-49b8-ad3c-0ee3b1612f64-openstack-config\") pod \"openstackclient\" (UID: \"6653ae5b-194f-49b8-ad3c-0ee3b1612f64\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.769108 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6653ae5b-194f-49b8-ad3c-0ee3b1612f64-openstack-config\") pod \"openstackclient\" (UID: \"6653ae5b-194f-49b8-ad3c-0ee3b1612f64\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.784719 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6653ae5b-194f-49b8-ad3c-0ee3b1612f64-openstack-config-secret\") pod \"openstackclient\" (UID: \"6653ae5b-194f-49b8-ad3c-0ee3b1612f64\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.787059 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6653ae5b-194f-49b8-ad3c-0ee3b1612f64-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6653ae5b-194f-49b8-ad3c-0ee3b1612f64\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.800729 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jlnq\" (UniqueName: \"kubernetes.io/projected/6653ae5b-194f-49b8-ad3c-0ee3b1612f64-kube-api-access-4jlnq\") pod \"openstackclient\" (UID: \"6653ae5b-194f-49b8-ad3c-0ee3b1612f64\") " pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.938545 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 19:39:25 crc kubenswrapper[4754]: I0218 19:39:25.988904 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.237889 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ed03dcf-4bbf-441d-b17f-f534b9640183" path="/var/lib/kubelet/pods/7ed03dcf-4bbf-441d-b17f-f534b9640183/volumes" Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.471418 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d0da334-a8b7-4c9f-94fd-5222d4d192a4","Type":"ContainerStarted","Data":"5d6548d8a8e06270ca2ef6089b34aed892613c49fe448c6f21f5ab0d64803d08"} Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.482249 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.483001 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51d3b1e7-508a-4e64-b50d-fd7b03727405","Type":"ContainerStarted","Data":"5a18ed531b6cb84980cb34c1cc8c94cba1f5ec0f26337617816ec25bd055f412"} Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.483333 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:39:26 crc kubenswrapper[4754]: W0218 19:39:26.504047 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6653ae5b_194f_49b8_ad3c_0ee3b1612f64.slice/crio-ccf5a53fe1e15b80b6f5e5ef972c58f6be062c12075ce4209ca9cf346e00f7a1 WatchSource:0}: Error finding container ccf5a53fe1e15b80b6f5e5ef972c58f6be062c12075ce4209ca9cf346e00f7a1: Status 404 returned error can't find the container with id ccf5a53fe1e15b80b6f5e5ef972c58f6be062c12075ce4209ca9cf346e00f7a1 Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 
19:39:26.507861 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.515063 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.520682 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.403553153 podStartE2EDuration="6.520670219s" podCreationTimestamp="2026-02-18 19:39:20 +0000 UTC" firstStartedPulling="2026-02-18 19:39:21.788182947 +0000 UTC m=+1264.238595743" lastFinishedPulling="2026-02-18 19:39:25.905300013 +0000 UTC m=+1268.355712809" observedRunningTime="2026-02-18 19:39:26.516093988 +0000 UTC m=+1268.966506784" watchObservedRunningTime="2026-02-18 19:39:26.520670219 +0000 UTC m=+1268.971083015" Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.523918 4754 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7" podUID="6653ae5b-194f-49b8-ad3c-0ee3b1612f64" Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.587658 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-combined-ca-bundle\") pod \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\" (UID: \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\") " Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.587760 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-openstack-config\") pod \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\" (UID: \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\") " Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.587865 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-openstack-config-secret\") pod \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\" (UID: \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\") " Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.587913 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8596\" (UniqueName: \"kubernetes.io/projected/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-kube-api-access-g8596\") pod \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\" (UID: \"8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7\") " Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.589071 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7" (UID: "8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.599864 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7" (UID: "8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.599983 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7" (UID: "8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.600048 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-kube-api-access-g8596" (OuterVolumeSpecName: "kube-api-access-g8596") pod "8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7" (UID: "8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7"). InnerVolumeSpecName "kube-api-access-g8596". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.690908 4754 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.690950 4754 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.690965 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8596\" (UniqueName: \"kubernetes.io/projected/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-kube-api-access-g8596\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:26 crc kubenswrapper[4754]: I0218 19:39:26.690974 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.089549 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.207503 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-public-tls-certs\") pod \"522d7b0a-243a-4469-b9f1-d6d838827080\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.207628 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-scripts\") pod \"522d7b0a-243a-4469-b9f1-d6d838827080\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.207668 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/522d7b0a-243a-4469-b9f1-d6d838827080-logs\") pod \"522d7b0a-243a-4469-b9f1-d6d838827080\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.207717 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-config-data\") pod \"522d7b0a-243a-4469-b9f1-d6d838827080\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.207781 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-combined-ca-bundle\") pod \"522d7b0a-243a-4469-b9f1-d6d838827080\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.207846 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmvl2\" (UniqueName: 
\"kubernetes.io/projected/522d7b0a-243a-4469-b9f1-d6d838827080-kube-api-access-cmvl2\") pod \"522d7b0a-243a-4469-b9f1-d6d838827080\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.207894 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-internal-tls-certs\") pod \"522d7b0a-243a-4469-b9f1-d6d838827080\" (UID: \"522d7b0a-243a-4469-b9f1-d6d838827080\") " Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.208400 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/522d7b0a-243a-4469-b9f1-d6d838827080-logs" (OuterVolumeSpecName: "logs") pod "522d7b0a-243a-4469-b9f1-d6d838827080" (UID: "522d7b0a-243a-4469-b9f1-d6d838827080"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.223457 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/522d7b0a-243a-4469-b9f1-d6d838827080-kube-api-access-cmvl2" (OuterVolumeSpecName: "kube-api-access-cmvl2") pod "522d7b0a-243a-4469-b9f1-d6d838827080" (UID: "522d7b0a-243a-4469-b9f1-d6d838827080"). InnerVolumeSpecName "kube-api-access-cmvl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.224497 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-scripts" (OuterVolumeSpecName: "scripts") pod "522d7b0a-243a-4469-b9f1-d6d838827080" (UID: "522d7b0a-243a-4469-b9f1-d6d838827080"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.292200 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "522d7b0a-243a-4469-b9f1-d6d838827080" (UID: "522d7b0a-243a-4469-b9f1-d6d838827080"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.311752 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.311783 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/522d7b0a-243a-4469-b9f1-d6d838827080-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.311793 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.311804 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmvl2\" (UniqueName: \"kubernetes.io/projected/522d7b0a-243a-4469-b9f1-d6d838827080-kube-api-access-cmvl2\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.316832 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-config-data" (OuterVolumeSpecName: "config-data") pod "522d7b0a-243a-4469-b9f1-d6d838827080" (UID: "522d7b0a-243a-4469-b9f1-d6d838827080"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.333844 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "522d7b0a-243a-4469-b9f1-d6d838827080" (UID: "522d7b0a-243a-4469-b9f1-d6d838827080"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.363771 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "522d7b0a-243a-4469-b9f1-d6d838827080" (UID: "522d7b0a-243a-4469-b9f1-d6d838827080"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.414322 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.414356 4754 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.414369 4754 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/522d7b0a-243a-4469-b9f1-d6d838827080-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.503923 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"6653ae5b-194f-49b8-ad3c-0ee3b1612f64","Type":"ContainerStarted","Data":"ccf5a53fe1e15b80b6f5e5ef972c58f6be062c12075ce4209ca9cf346e00f7a1"} Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.507935 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d0da334-a8b7-4c9f-94fd-5222d4d192a4","Type":"ContainerStarted","Data":"4c1f346279553f962ac4f8cafcff1df196e41610b1193338939a7e467bff359c"} Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.514318 4754 generic.go:334] "Generic (PLEG): container finished" podID="522d7b0a-243a-4469-b9f1-d6d838827080" containerID="76ecba34c4bb8be5d5369b3b679f32ad1fb79f3b61666e44d71cfe6f1518ae78" exitCode=0 Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.514438 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.514653 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8669779bb4-bghj5" event={"ID":"522d7b0a-243a-4469-b9f1-d6d838827080","Type":"ContainerDied","Data":"76ecba34c4bb8be5d5369b3b679f32ad1fb79f3b61666e44d71cfe6f1518ae78"} Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.514726 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8669779bb4-bghj5" event={"ID":"522d7b0a-243a-4469-b9f1-d6d838827080","Type":"ContainerDied","Data":"7f33defc14e6c90e6984cdd94862896e8937f4c6b4b65489d3aef6947c80c6f0"} Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.514751 4754 scope.go:117] "RemoveContainer" containerID="76ecba34c4bb8be5d5369b3b679f32ad1fb79f3b61666e44d71cfe6f1518ae78" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.514769 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8669779bb4-bghj5" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.536840 4754 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7" podUID="6653ae5b-194f-49b8-ad3c-0ee3b1612f64" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.550477 4754 scope.go:117] "RemoveContainer" containerID="30740e6b62f23104634c587affcd7f1907d60cd41503653455c75e86339bba83" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.567946 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8669779bb4-bghj5"] Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.577763 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8669779bb4-bghj5"] Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.672508 4754 scope.go:117] "RemoveContainer" containerID="76ecba34c4bb8be5d5369b3b679f32ad1fb79f3b61666e44d71cfe6f1518ae78" Feb 18 19:39:27 crc kubenswrapper[4754]: E0218 19:39:27.673423 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76ecba34c4bb8be5d5369b3b679f32ad1fb79f3b61666e44d71cfe6f1518ae78\": container with ID starting with 76ecba34c4bb8be5d5369b3b679f32ad1fb79f3b61666e44d71cfe6f1518ae78 not found: ID does not exist" containerID="76ecba34c4bb8be5d5369b3b679f32ad1fb79f3b61666e44d71cfe6f1518ae78" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.673455 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76ecba34c4bb8be5d5369b3b679f32ad1fb79f3b61666e44d71cfe6f1518ae78"} err="failed to get container status \"76ecba34c4bb8be5d5369b3b679f32ad1fb79f3b61666e44d71cfe6f1518ae78\": rpc error: code = NotFound desc = could not find container \"76ecba34c4bb8be5d5369b3b679f32ad1fb79f3b61666e44d71cfe6f1518ae78\": container with ID starting with 
76ecba34c4bb8be5d5369b3b679f32ad1fb79f3b61666e44d71cfe6f1518ae78 not found: ID does not exist" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.673477 4754 scope.go:117] "RemoveContainer" containerID="30740e6b62f23104634c587affcd7f1907d60cd41503653455c75e86339bba83" Feb 18 19:39:27 crc kubenswrapper[4754]: E0218 19:39:27.673694 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30740e6b62f23104634c587affcd7f1907d60cd41503653455c75e86339bba83\": container with ID starting with 30740e6b62f23104634c587affcd7f1907d60cd41503653455c75e86339bba83 not found: ID does not exist" containerID="30740e6b62f23104634c587affcd7f1907d60cd41503653455c75e86339bba83" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.673710 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30740e6b62f23104634c587affcd7f1907d60cd41503653455c75e86339bba83"} err="failed to get container status \"30740e6b62f23104634c587affcd7f1907d60cd41503653455c75e86339bba83\": rpc error: code = NotFound desc = could not find container \"30740e6b62f23104634c587affcd7f1907d60cd41503653455c75e86339bba83\": container with ID starting with 30740e6b62f23104634c587affcd7f1907d60cd41503653455c75e86339bba83 not found: ID does not exist" Feb 18 19:39:27 crc kubenswrapper[4754]: I0218 19:39:27.875680 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 18 19:39:28 crc kubenswrapper[4754]: I0218 19:39:28.253725 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="522d7b0a-243a-4469-b9f1-d6d838827080" path="/var/lib/kubelet/pods/522d7b0a-243a-4469-b9f1-d6d838827080/volumes" Feb 18 19:39:28 crc kubenswrapper[4754]: I0218 19:39:28.254378 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7" path="/var/lib/kubelet/pods/8ede1e3a-054c-49f1-aaf8-b0a2fc4fd9c7/volumes" 
Feb 18 19:39:28 crc kubenswrapper[4754]: I0218 19:39:28.528161 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9d0da334-a8b7-4c9f-94fd-5222d4d192a4","Type":"ContainerStarted","Data":"15f659a9865a3eda37358463bae1c29485a8494e0110c7ec8699a7d353cf3229"} Feb 18 19:39:28 crc kubenswrapper[4754]: I0218 19:39:28.559815 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.559791924 podStartE2EDuration="4.559791924s" podCreationTimestamp="2026-02-18 19:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:28.555406318 +0000 UTC m=+1271.005819114" watchObservedRunningTime="2026-02-18 19:39:28.559791924 +0000 UTC m=+1271.010204720" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.014456 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6766dcd7b5-n5xnp"] Feb 18 19:39:30 crc kubenswrapper[4754]: E0218 19:39:30.015384 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="522d7b0a-243a-4469-b9f1-d6d838827080" containerName="placement-log" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.015399 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="522d7b0a-243a-4469-b9f1-d6d838827080" containerName="placement-log" Feb 18 19:39:30 crc kubenswrapper[4754]: E0218 19:39:30.015413 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="522d7b0a-243a-4469-b9f1-d6d838827080" containerName="placement-api" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.015419 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="522d7b0a-243a-4469-b9f1-d6d838827080" containerName="placement-api" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.015592 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="522d7b0a-243a-4469-b9f1-d6d838827080" 
containerName="placement-api" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.015603 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="522d7b0a-243a-4469-b9f1-d6d838827080" containerName="placement-log" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.017591 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.024452 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.024730 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.025196 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.040512 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6766dcd7b5-n5xnp"] Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.074913 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa2625da-c499-4e0d-a13c-15aae4128a26-config-data\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.074962 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa2625da-c499-4e0d-a13c-15aae4128a26-etc-swift\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.074991 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa2625da-c499-4e0d-a13c-15aae4128a26-run-httpd\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.075040 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa2625da-c499-4e0d-a13c-15aae4128a26-internal-tls-certs\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.075107 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa2625da-c499-4e0d-a13c-15aae4128a26-combined-ca-bundle\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.075163 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlk89\" (UniqueName: \"kubernetes.io/projected/aa2625da-c499-4e0d-a13c-15aae4128a26-kube-api-access-vlk89\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.075198 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa2625da-c499-4e0d-a13c-15aae4128a26-log-httpd\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 
19:39:30.075215 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa2625da-c499-4e0d-a13c-15aae4128a26-public-tls-certs\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.178965 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa2625da-c499-4e0d-a13c-15aae4128a26-combined-ca-bundle\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.179081 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlk89\" (UniqueName: \"kubernetes.io/projected/aa2625da-c499-4e0d-a13c-15aae4128a26-kube-api-access-vlk89\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.179165 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa2625da-c499-4e0d-a13c-15aae4128a26-log-httpd\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.179255 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa2625da-c499-4e0d-a13c-15aae4128a26-public-tls-certs\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.179318 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa2625da-c499-4e0d-a13c-15aae4128a26-config-data\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.179355 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa2625da-c499-4e0d-a13c-15aae4128a26-etc-swift\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.179455 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa2625da-c499-4e0d-a13c-15aae4128a26-run-httpd\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.179544 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa2625da-c499-4e0d-a13c-15aae4128a26-internal-tls-certs\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.182308 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.182787 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa2625da-c499-4e0d-a13c-15aae4128a26-log-httpd\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " 
pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.194504 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa2625da-c499-4e0d-a13c-15aae4128a26-run-httpd\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.195748 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa2625da-c499-4e0d-a13c-15aae4128a26-combined-ca-bundle\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.197831 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aa2625da-c499-4e0d-a13c-15aae4128a26-etc-swift\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.207243 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlk89\" (UniqueName: \"kubernetes.io/projected/aa2625da-c499-4e0d-a13c-15aae4128a26-kube-api-access-vlk89\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.207698 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa2625da-c499-4e0d-a13c-15aae4128a26-internal-tls-certs\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: 
I0218 19:39:30.221125 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa2625da-c499-4e0d-a13c-15aae4128a26-config-data\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.234053 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa2625da-c499-4e0d-a13c-15aae4128a26-public-tls-certs\") pod \"swift-proxy-6766dcd7b5-n5xnp\" (UID: \"aa2625da-c499-4e0d-a13c-15aae4128a26\") " pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.356903 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:30 crc kubenswrapper[4754]: I0218 19:39:30.988759 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6766dcd7b5-n5xnp"] Feb 18 19:39:31 crc kubenswrapper[4754]: W0218 19:39:31.008130 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa2625da_c499_4e0d_a13c_15aae4128a26.slice/crio-fbc67d30e93de9d9c6ddd44cdd785644dda49c57afab109f8b2aed0a64ac45a6 WatchSource:0}: Error finding container fbc67d30e93de9d9c6ddd44cdd785644dda49c57afab109f8b2aed0a64ac45a6: Status 404 returned error can't find the container with id fbc67d30e93de9d9c6ddd44cdd785644dda49c57afab109f8b2aed0a64ac45a6 Feb 18 19:39:31 crc kubenswrapper[4754]: I0218 19:39:31.583180 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6766dcd7b5-n5xnp" event={"ID":"aa2625da-c499-4e0d-a13c-15aae4128a26","Type":"ContainerStarted","Data":"5087b280c6efbe3a511085a817b997d4094416457579ddcab40f7abc7089b8ad"} Feb 18 19:39:31 crc kubenswrapper[4754]: I0218 19:39:31.583683 4754 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6766dcd7b5-n5xnp" event={"ID":"aa2625da-c499-4e0d-a13c-15aae4128a26","Type":"ContainerStarted","Data":"8af38f0f85889c6fce34c5dbb55946312f5b25057465fdc72b95bbc8a843db00"} Feb 18 19:39:31 crc kubenswrapper[4754]: I0218 19:39:31.583699 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6766dcd7b5-n5xnp" event={"ID":"aa2625da-c499-4e0d-a13c-15aae4128a26","Type":"ContainerStarted","Data":"fbc67d30e93de9d9c6ddd44cdd785644dda49c57afab109f8b2aed0a64ac45a6"} Feb 18 19:39:31 crc kubenswrapper[4754]: I0218 19:39:31.583740 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:31 crc kubenswrapper[4754]: I0218 19:39:31.583775 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:31 crc kubenswrapper[4754]: I0218 19:39:31.617727 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6766dcd7b5-n5xnp" podStartSLOduration=2.617702592 podStartE2EDuration="2.617702592s" podCreationTimestamp="2026-02-18 19:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:31.610603984 +0000 UTC m=+1274.061016780" watchObservedRunningTime="2026-02-18 19:39:31.617702592 +0000 UTC m=+1274.068115388" Feb 18 19:39:32 crc kubenswrapper[4754]: I0218 19:39:32.305768 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:32 crc kubenswrapper[4754]: I0218 19:39:32.306407 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerName="ceilometer-central-agent" containerID="cri-o://25129fe57081e00216e56156b475e491b39dc2424349bd14215ea85c3daf662f" gracePeriod=30 Feb 18 
19:39:32 crc kubenswrapper[4754]: I0218 19:39:32.306896 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerName="proxy-httpd" containerID="cri-o://5a18ed531b6cb84980cb34c1cc8c94cba1f5ec0f26337617816ec25bd055f412" gracePeriod=30 Feb 18 19:39:32 crc kubenswrapper[4754]: I0218 19:39:32.306950 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerName="sg-core" containerID="cri-o://b71252f9f0ef4180c959f903f18de31a93551b93921d24302d1e2c7c298ddd16" gracePeriod=30 Feb 18 19:39:32 crc kubenswrapper[4754]: I0218 19:39:32.306990 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerName="ceilometer-notification-agent" containerID="cri-o://45221b88190f28d3c52b8fe1b5fa6f857524acb5eb639f56c1464f21d56344d1" gracePeriod=30 Feb 18 19:39:32 crc kubenswrapper[4754]: I0218 19:39:32.611271 4754 generic.go:334] "Generic (PLEG): container finished" podID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerID="5a18ed531b6cb84980cb34c1cc8c94cba1f5ec0f26337617816ec25bd055f412" exitCode=0 Feb 18 19:39:32 crc kubenswrapper[4754]: I0218 19:39:32.611327 4754 generic.go:334] "Generic (PLEG): container finished" podID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerID="b71252f9f0ef4180c959f903f18de31a93551b93921d24302d1e2c7c298ddd16" exitCode=2 Feb 18 19:39:32 crc kubenswrapper[4754]: I0218 19:39:32.611820 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51d3b1e7-508a-4e64-b50d-fd7b03727405","Type":"ContainerDied","Data":"5a18ed531b6cb84980cb34c1cc8c94cba1f5ec0f26337617816ec25bd055f412"} Feb 18 19:39:32 crc kubenswrapper[4754]: I0218 19:39:32.611918 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"51d3b1e7-508a-4e64-b50d-fd7b03727405","Type":"ContainerDied","Data":"b71252f9f0ef4180c959f903f18de31a93551b93921d24302d1e2c7c298ddd16"} Feb 18 19:39:32 crc kubenswrapper[4754]: I0218 19:39:32.806608 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c78c979c7-gflt2" Feb 18 19:39:32 crc kubenswrapper[4754]: I0218 19:39:32.892363 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c6fb7f68-h72q7"] Feb 18 19:39:32 crc kubenswrapper[4754]: I0218 19:39:32.892664 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c6fb7f68-h72q7" podUID="5de1542f-0d57-4a6e-bfac-7557e6dda66e" containerName="neutron-api" containerID="cri-o://15243c08b4401352a5c40991de1875e41dd23e3ad2879feee8a08de5d3e7175a" gracePeriod=30 Feb 18 19:39:32 crc kubenswrapper[4754]: I0218 19:39:32.893134 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7c6fb7f68-h72q7" podUID="5de1542f-0d57-4a6e-bfac-7557e6dda66e" containerName="neutron-httpd" containerID="cri-o://34b238d4c32258d2339dd3df201324c5556c452a2d58b11510e0b1c540ce9f9a" gracePeriod=30 Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.256468 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.257003 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f30c68d8-931c-4aca-a099-c7c969a92b61" containerName="glance-log" containerID="cri-o://78930d0332c4e8c6b510e6585a390d7c76c5955b44bd1a2e3bba25ed3f96d339" gracePeriod=30 Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.257501 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f30c68d8-931c-4aca-a099-c7c969a92b61" containerName="glance-httpd" 
containerID="cri-o://9eec0a4f901921f692b3e6dbe60ef1b6a448c270eced53b342cf297ad53381d4" gracePeriod=30 Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.635249 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.671602 4754 generic.go:334] "Generic (PLEG): container finished" podID="c99f043f-84fb-4825-8ba7-c918263e6c7f" containerID="7ad6d935c31c4a41096116f9831315cd554226731e874722795ee996818ccd68" exitCode=137 Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.672295 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f7766589b-gh94d" event={"ID":"c99f043f-84fb-4825-8ba7-c918263e6c7f","Type":"ContainerDied","Data":"7ad6d935c31c4a41096116f9831315cd554226731e874722795ee996818ccd68"} Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.686525 4754 generic.go:334] "Generic (PLEG): container finished" podID="f30c68d8-931c-4aca-a099-c7c969a92b61" containerID="78930d0332c4e8c6b510e6585a390d7c76c5955b44bd1a2e3bba25ed3f96d339" exitCode=143 Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.686719 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f30c68d8-931c-4aca-a099-c7c969a92b61","Type":"ContainerDied","Data":"78930d0332c4e8c6b510e6585a390d7c76c5955b44bd1a2e3bba25ed3f96d339"} Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.692550 4754 generic.go:334] "Generic (PLEG): container finished" podID="5de1542f-0d57-4a6e-bfac-7557e6dda66e" containerID="34b238d4c32258d2339dd3df201324c5556c452a2d58b11510e0b1c540ce9f9a" exitCode=0 Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.692851 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c6fb7f68-h72q7" event={"ID":"5de1542f-0d57-4a6e-bfac-7557e6dda66e","Type":"ContainerDied","Data":"34b238d4c32258d2339dd3df201324c5556c452a2d58b11510e0b1c540ce9f9a"} Feb 18 19:39:33 crc 
kubenswrapper[4754]: I0218 19:39:33.698462 4754 generic.go:334] "Generic (PLEG): container finished" podID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerID="45221b88190f28d3c52b8fe1b5fa6f857524acb5eb639f56c1464f21d56344d1" exitCode=0 Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.698499 4754 generic.go:334] "Generic (PLEG): container finished" podID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerID="25129fe57081e00216e56156b475e491b39dc2424349bd14215ea85c3daf662f" exitCode=0 Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.698544 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51d3b1e7-508a-4e64-b50d-fd7b03727405","Type":"ContainerDied","Data":"45221b88190f28d3c52b8fe1b5fa6f857524acb5eb639f56c1464f21d56344d1"} Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.698611 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51d3b1e7-508a-4e64-b50d-fd7b03727405","Type":"ContainerDied","Data":"25129fe57081e00216e56156b475e491b39dc2424349bd14215ea85c3daf662f"} Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.698623 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51d3b1e7-508a-4e64-b50d-fd7b03727405","Type":"ContainerDied","Data":"c530e9f889dd33edf611e7ea64e9905c88f5cf1edac68fdd3fc13bf3195c1807"} Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.698647 4754 scope.go:117] "RemoveContainer" containerID="5a18ed531b6cb84980cb34c1cc8c94cba1f5ec0f26337617816ec25bd055f412" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.698874 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.711760 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-config-data\") pod \"51d3b1e7-508a-4e64-b50d-fd7b03727405\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.711827 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-sg-core-conf-yaml\") pod \"51d3b1e7-508a-4e64-b50d-fd7b03727405\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.711877 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckfmd\" (UniqueName: \"kubernetes.io/projected/51d3b1e7-508a-4e64-b50d-fd7b03727405-kube-api-access-ckfmd\") pod \"51d3b1e7-508a-4e64-b50d-fd7b03727405\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.712031 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51d3b1e7-508a-4e64-b50d-fd7b03727405-log-httpd\") pod \"51d3b1e7-508a-4e64-b50d-fd7b03727405\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.712046 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-scripts\") pod \"51d3b1e7-508a-4e64-b50d-fd7b03727405\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.712111 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/51d3b1e7-508a-4e64-b50d-fd7b03727405-run-httpd\") pod \"51d3b1e7-508a-4e64-b50d-fd7b03727405\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.712184 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-combined-ca-bundle\") pod \"51d3b1e7-508a-4e64-b50d-fd7b03727405\" (UID: \"51d3b1e7-508a-4e64-b50d-fd7b03727405\") " Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.713654 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51d3b1e7-508a-4e64-b50d-fd7b03727405-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "51d3b1e7-508a-4e64-b50d-fd7b03727405" (UID: "51d3b1e7-508a-4e64-b50d-fd7b03727405"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.714282 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51d3b1e7-508a-4e64-b50d-fd7b03727405-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "51d3b1e7-508a-4e64-b50d-fd7b03727405" (UID: "51d3b1e7-508a-4e64-b50d-fd7b03727405"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.721288 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-scripts" (OuterVolumeSpecName: "scripts") pod "51d3b1e7-508a-4e64-b50d-fd7b03727405" (UID: "51d3b1e7-508a-4e64-b50d-fd7b03727405"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.743799 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d3b1e7-508a-4e64-b50d-fd7b03727405-kube-api-access-ckfmd" (OuterVolumeSpecName: "kube-api-access-ckfmd") pod "51d3b1e7-508a-4e64-b50d-fd7b03727405" (UID: "51d3b1e7-508a-4e64-b50d-fd7b03727405"). InnerVolumeSpecName "kube-api-access-ckfmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.768215 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "51d3b1e7-508a-4e64-b50d-fd7b03727405" (UID: "51d3b1e7-508a-4e64-b50d-fd7b03727405"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.786279 4754 scope.go:117] "RemoveContainer" containerID="b71252f9f0ef4180c959f903f18de31a93551b93921d24302d1e2c7c298ddd16" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.817187 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckfmd\" (UniqueName: \"kubernetes.io/projected/51d3b1e7-508a-4e64-b50d-fd7b03727405-kube-api-access-ckfmd\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.817261 4754 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51d3b1e7-508a-4e64-b50d-fd7b03727405-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.817275 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.817320 4754 
reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51d3b1e7-508a-4e64-b50d-fd7b03727405-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.817332 4754 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.866933 4754 scope.go:117] "RemoveContainer" containerID="45221b88190f28d3c52b8fe1b5fa6f857524acb5eb639f56c1464f21d56344d1" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.891883 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-config-data" (OuterVolumeSpecName: "config-data") pod "51d3b1e7-508a-4e64-b50d-fd7b03727405" (UID: "51d3b1e7-508a-4e64-b50d-fd7b03727405"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.912451 4754 scope.go:117] "RemoveContainer" containerID="25129fe57081e00216e56156b475e491b39dc2424349bd14215ea85c3daf662f" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.919035 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.941358 4754 scope.go:117] "RemoveContainer" containerID="5a18ed531b6cb84980cb34c1cc8c94cba1f5ec0f26337617816ec25bd055f412" Feb 18 19:39:33 crc kubenswrapper[4754]: E0218 19:39:33.941770 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a18ed531b6cb84980cb34c1cc8c94cba1f5ec0f26337617816ec25bd055f412\": container with ID starting with 5a18ed531b6cb84980cb34c1cc8c94cba1f5ec0f26337617816ec25bd055f412 not found: ID does not exist" containerID="5a18ed531b6cb84980cb34c1cc8c94cba1f5ec0f26337617816ec25bd055f412" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.941811 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a18ed531b6cb84980cb34c1cc8c94cba1f5ec0f26337617816ec25bd055f412"} err="failed to get container status \"5a18ed531b6cb84980cb34c1cc8c94cba1f5ec0f26337617816ec25bd055f412\": rpc error: code = NotFound desc = could not find container \"5a18ed531b6cb84980cb34c1cc8c94cba1f5ec0f26337617816ec25bd055f412\": container with ID starting with 5a18ed531b6cb84980cb34c1cc8c94cba1f5ec0f26337617816ec25bd055f412 not found: ID does not exist" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.941835 4754 scope.go:117] "RemoveContainer" containerID="b71252f9f0ef4180c959f903f18de31a93551b93921d24302d1e2c7c298ddd16" Feb 18 19:39:33 crc kubenswrapper[4754]: E0218 19:39:33.942050 4754 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b71252f9f0ef4180c959f903f18de31a93551b93921d24302d1e2c7c298ddd16\": container with ID starting with b71252f9f0ef4180c959f903f18de31a93551b93921d24302d1e2c7c298ddd16 not found: ID does not exist" containerID="b71252f9f0ef4180c959f903f18de31a93551b93921d24302d1e2c7c298ddd16" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.942075 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b71252f9f0ef4180c959f903f18de31a93551b93921d24302d1e2c7c298ddd16"} err="failed to get container status \"b71252f9f0ef4180c959f903f18de31a93551b93921d24302d1e2c7c298ddd16\": rpc error: code = NotFound desc = could not find container \"b71252f9f0ef4180c959f903f18de31a93551b93921d24302d1e2c7c298ddd16\": container with ID starting with b71252f9f0ef4180c959f903f18de31a93551b93921d24302d1e2c7c298ddd16 not found: ID does not exist" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.942093 4754 scope.go:117] "RemoveContainer" containerID="45221b88190f28d3c52b8fe1b5fa6f857524acb5eb639f56c1464f21d56344d1" Feb 18 19:39:33 crc kubenswrapper[4754]: E0218 19:39:33.942570 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45221b88190f28d3c52b8fe1b5fa6f857524acb5eb639f56c1464f21d56344d1\": container with ID starting with 45221b88190f28d3c52b8fe1b5fa6f857524acb5eb639f56c1464f21d56344d1 not found: ID does not exist" containerID="45221b88190f28d3c52b8fe1b5fa6f857524acb5eb639f56c1464f21d56344d1" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.942624 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45221b88190f28d3c52b8fe1b5fa6f857524acb5eb639f56c1464f21d56344d1"} err="failed to get container status \"45221b88190f28d3c52b8fe1b5fa6f857524acb5eb639f56c1464f21d56344d1\": rpc error: code = NotFound desc = could 
not find container \"45221b88190f28d3c52b8fe1b5fa6f857524acb5eb639f56c1464f21d56344d1\": container with ID starting with 45221b88190f28d3c52b8fe1b5fa6f857524acb5eb639f56c1464f21d56344d1 not found: ID does not exist" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.942663 4754 scope.go:117] "RemoveContainer" containerID="25129fe57081e00216e56156b475e491b39dc2424349bd14215ea85c3daf662f" Feb 18 19:39:33 crc kubenswrapper[4754]: E0218 19:39:33.943077 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25129fe57081e00216e56156b475e491b39dc2424349bd14215ea85c3daf662f\": container with ID starting with 25129fe57081e00216e56156b475e491b39dc2424349bd14215ea85c3daf662f not found: ID does not exist" containerID="25129fe57081e00216e56156b475e491b39dc2424349bd14215ea85c3daf662f" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.943099 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25129fe57081e00216e56156b475e491b39dc2424349bd14215ea85c3daf662f"} err="failed to get container status \"25129fe57081e00216e56156b475e491b39dc2424349bd14215ea85c3daf662f\": rpc error: code = NotFound desc = could not find container \"25129fe57081e00216e56156b475e491b39dc2424349bd14215ea85c3daf662f\": container with ID starting with 25129fe57081e00216e56156b475e491b39dc2424349bd14215ea85c3daf662f not found: ID does not exist" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.943113 4754 scope.go:117] "RemoveContainer" containerID="5a18ed531b6cb84980cb34c1cc8c94cba1f5ec0f26337617816ec25bd055f412" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.943356 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a18ed531b6cb84980cb34c1cc8c94cba1f5ec0f26337617816ec25bd055f412"} err="failed to get container status \"5a18ed531b6cb84980cb34c1cc8c94cba1f5ec0f26337617816ec25bd055f412\": rpc error: code = NotFound 
desc = could not find container \"5a18ed531b6cb84980cb34c1cc8c94cba1f5ec0f26337617816ec25bd055f412\": container with ID starting with 5a18ed531b6cb84980cb34c1cc8c94cba1f5ec0f26337617816ec25bd055f412 not found: ID does not exist" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.943374 4754 scope.go:117] "RemoveContainer" containerID="b71252f9f0ef4180c959f903f18de31a93551b93921d24302d1e2c7c298ddd16" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.943606 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b71252f9f0ef4180c959f903f18de31a93551b93921d24302d1e2c7c298ddd16"} err="failed to get container status \"b71252f9f0ef4180c959f903f18de31a93551b93921d24302d1e2c7c298ddd16\": rpc error: code = NotFound desc = could not find container \"b71252f9f0ef4180c959f903f18de31a93551b93921d24302d1e2c7c298ddd16\": container with ID starting with b71252f9f0ef4180c959f903f18de31a93551b93921d24302d1e2c7c298ddd16 not found: ID does not exist" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.943624 4754 scope.go:117] "RemoveContainer" containerID="45221b88190f28d3c52b8fe1b5fa6f857524acb5eb639f56c1464f21d56344d1" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.943796 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45221b88190f28d3c52b8fe1b5fa6f857524acb5eb639f56c1464f21d56344d1"} err="failed to get container status \"45221b88190f28d3c52b8fe1b5fa6f857524acb5eb639f56c1464f21d56344d1\": rpc error: code = NotFound desc = could not find container \"45221b88190f28d3c52b8fe1b5fa6f857524acb5eb639f56c1464f21d56344d1\": container with ID starting with 45221b88190f28d3c52b8fe1b5fa6f857524acb5eb639f56c1464f21d56344d1 not found: ID does not exist" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.943814 4754 scope.go:117] "RemoveContainer" containerID="25129fe57081e00216e56156b475e491b39dc2424349bd14215ea85c3daf662f" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 
19:39:33.943970 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25129fe57081e00216e56156b475e491b39dc2424349bd14215ea85c3daf662f"} err="failed to get container status \"25129fe57081e00216e56156b475e491b39dc2424349bd14215ea85c3daf662f\": rpc error: code = NotFound desc = could not find container \"25129fe57081e00216e56156b475e491b39dc2424349bd14215ea85c3daf662f\": container with ID starting with 25129fe57081e00216e56156b475e491b39dc2424349bd14215ea85c3daf662f not found: ID does not exist" Feb 18 19:39:33 crc kubenswrapper[4754]: I0218 19:39:33.947851 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51d3b1e7-508a-4e64-b50d-fd7b03727405" (UID: "51d3b1e7-508a-4e64-b50d-fd7b03727405"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.021233 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d3b1e7-508a-4e64-b50d-fd7b03727405-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.049030 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.060651 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.081904 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:34 crc kubenswrapper[4754]: E0218 19:39:34.082568 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerName="ceilometer-notification-agent" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.082645 4754 
state_mem.go:107] "Deleted CPUSet assignment" podUID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerName="ceilometer-notification-agent" Feb 18 19:39:34 crc kubenswrapper[4754]: E0218 19:39:34.082711 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerName="proxy-httpd" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.082780 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerName="proxy-httpd" Feb 18 19:39:34 crc kubenswrapper[4754]: E0218 19:39:34.082850 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerName="sg-core" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.082900 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerName="sg-core" Feb 18 19:39:34 crc kubenswrapper[4754]: E0218 19:39:34.082969 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerName="ceilometer-central-agent" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.083021 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerName="ceilometer-central-agent" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.083284 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerName="proxy-httpd" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.083375 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerName="ceilometer-notification-agent" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.083436 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerName="ceilometer-central-agent" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 
19:39:34.083493 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d3b1e7-508a-4e64-b50d-fd7b03727405" containerName="sg-core" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.085374 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.089079 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.089532 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.102207 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.223894 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51d3b1e7-508a-4e64-b50d-fd7b03727405" path="/var/lib/kubelet/pods/51d3b1e7-508a-4e64-b50d-fd7b03727405/volumes" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.224670 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-config-data\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.224719 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72b4e9c4-d773-4dde-b38e-378bc4dd1277-run-httpd\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.224782 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-scripts\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.224891 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72b4e9c4-d773-4dde-b38e-378bc4dd1277-log-httpd\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.225046 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.225111 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkjxg\" (UniqueName: \"kubernetes.io/projected/72b4e9c4-d773-4dde-b38e-378bc4dd1277-kube-api-access-bkjxg\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.225358 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.327828 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.328405 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-config-data\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.328450 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72b4e9c4-d773-4dde-b38e-378bc4dd1277-run-httpd\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.328530 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-scripts\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.328556 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72b4e9c4-d773-4dde-b38e-378bc4dd1277-log-httpd\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.328640 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.328692 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkjxg\" (UniqueName: 
\"kubernetes.io/projected/72b4e9c4-d773-4dde-b38e-378bc4dd1277-kube-api-access-bkjxg\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.329550 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72b4e9c4-d773-4dde-b38e-378bc4dd1277-log-httpd\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.330205 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72b4e9c4-d773-4dde-b38e-378bc4dd1277-run-httpd\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.335843 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-scripts\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.337293 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-config-data\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.337775 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.338054 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.350880 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkjxg\" (UniqueName: \"kubernetes.io/projected/72b4e9c4-d773-4dde-b38e-378bc4dd1277-kube-api-access-bkjxg\") pod \"ceilometer-0\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.413165 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.817602 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f7766589b-gh94d" event={"ID":"c99f043f-84fb-4825-8ba7-c918263e6c7f","Type":"ContainerStarted","Data":"4caf5fa67a1858baaa7b42c6d82874936996ec2e95404203f3a969cd5907e881"} Feb 18 19:39:34 crc kubenswrapper[4754]: I0218 19:39:34.968731 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:35 crc kubenswrapper[4754]: I0218 19:39:35.547359 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 19:39:35 crc kubenswrapper[4754]: I0218 19:39:35.583415 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:36 crc kubenswrapper[4754]: I0218 19:39:36.839988 4754 generic.go:334] "Generic (PLEG): container finished" podID="5de1542f-0d57-4a6e-bfac-7557e6dda66e" containerID="15243c08b4401352a5c40991de1875e41dd23e3ad2879feee8a08de5d3e7175a" exitCode=0 Feb 18 19:39:36 crc kubenswrapper[4754]: I0218 19:39:36.840048 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c6fb7f68-h72q7" 
event={"ID":"5de1542f-0d57-4a6e-bfac-7557e6dda66e","Type":"ContainerDied","Data":"15243c08b4401352a5c40991de1875e41dd23e3ad2879feee8a08de5d3e7175a"} Feb 18 19:39:36 crc kubenswrapper[4754]: I0218 19:39:36.841959 4754 generic.go:334] "Generic (PLEG): container finished" podID="f30c68d8-931c-4aca-a099-c7c969a92b61" containerID="9eec0a4f901921f692b3e6dbe60ef1b6a448c270eced53b342cf297ad53381d4" exitCode=0 Feb 18 19:39:36 crc kubenswrapper[4754]: I0218 19:39:36.842011 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f30c68d8-931c-4aca-a099-c7c969a92b61","Type":"ContainerDied","Data":"9eec0a4f901921f692b3e6dbe60ef1b6a448c270eced53b342cf297ad53381d4"} Feb 18 19:39:38 crc kubenswrapper[4754]: I0218 19:39:38.097551 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:39:38 crc kubenswrapper[4754]: I0218 19:39:38.097918 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:39:38 crc kubenswrapper[4754]: I0218 19:39:38.718444 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:39:38 crc kubenswrapper[4754]: I0218 19:39:38.718717 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e693768a-995f-412b-bede-020e44d17d03" containerName="glance-log" containerID="cri-o://e2a3c2b5d2fcfdf790e283b783371fee59decad7964454a6bccb17e361e0e1ba" gracePeriod=30 Feb 18 
19:39:38 crc kubenswrapper[4754]: I0218 19:39:38.718831 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e693768a-995f-412b-bede-020e44d17d03" containerName="glance-httpd" containerID="cri-o://4274789014a180f7b040256175cf68d15c60e81fafb8d63ab7f9c66b59be702b" gracePeriod=30 Feb 18 19:39:38 crc kubenswrapper[4754]: I0218 19:39:38.878407 4754 generic.go:334] "Generic (PLEG): container finished" podID="e693768a-995f-412b-bede-020e44d17d03" containerID="e2a3c2b5d2fcfdf790e283b783371fee59decad7964454a6bccb17e361e0e1ba" exitCode=143 Feb 18 19:39:38 crc kubenswrapper[4754]: I0218 19:39:38.878476 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e693768a-995f-412b-bede-020e44d17d03","Type":"ContainerDied","Data":"e2a3c2b5d2fcfdf790e283b783371fee59decad7964454a6bccb17e361e0e1ba"} Feb 18 19:39:40 crc kubenswrapper[4754]: I0218 19:39:40.368379 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:40 crc kubenswrapper[4754]: I0218 19:39:40.371049 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6766dcd7b5-n5xnp" Feb 18 19:39:40 crc kubenswrapper[4754]: I0218 19:39:40.512914 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="f30c68d8-931c-4aca-a099-c7c969a92b61" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.169:9292/healthcheck\": dial tcp 10.217.0.169:9292: connect: connection refused" Feb 18 19:39:40 crc kubenswrapper[4754]: I0218 19:39:40.513263 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="f30c68d8-931c-4aca-a099-c7c969a92b61" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.169:9292/healthcheck\": dial tcp 
10.217.0.169:9292: connect: connection refused" Feb 18 19:39:41 crc kubenswrapper[4754]: I0218 19:39:41.929806 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="e693768a-995f-412b-bede-020e44d17d03" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.170:9292/healthcheck\": read tcp 10.217.0.2:39024->10.217.0.170:9292: read: connection reset by peer" Feb 18 19:39:41 crc kubenswrapper[4754]: I0218 19:39:41.933539 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="e693768a-995f-412b-bede-020e44d17d03" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.170:9292/healthcheck\": read tcp 10.217.0.2:39026->10.217.0.170:9292: read: connection reset by peer" Feb 18 19:39:42 crc kubenswrapper[4754]: I0218 19:39:42.447915 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:39:42 crc kubenswrapper[4754]: I0218 19:39:42.868110 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:42 crc kubenswrapper[4754]: I0218 19:39:42.935826 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:39:42 crc kubenswrapper[4754]: I0218 19:39:42.936447 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f30c68d8-931c-4aca-a099-c7c969a92b61","Type":"ContainerDied","Data":"a620fefac7b3f7a94ab0074525054a8a7094c2df3b973120511557bcbc62fab5"} Feb 18 19:39:42 crc kubenswrapper[4754]: I0218 19:39:42.936493 4754 scope.go:117] "RemoveContainer" containerID="9eec0a4f901921f692b3e6dbe60ef1b6a448c270eced53b342cf297ad53381d4" Feb 18 19:39:42 crc kubenswrapper[4754]: I0218 19:39:42.936646 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:42 crc kubenswrapper[4754]: I0218 19:39:42.939610 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6653ae5b-194f-49b8-ad3c-0ee3b1612f64","Type":"ContainerStarted","Data":"a680ae36d52442738124ac0da25180b9b23bd1c5bf89b3efeccc865a19bc050e"} Feb 18 19:39:42 crc kubenswrapper[4754]: I0218 19:39:42.942870 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72b4e9c4-d773-4dde-b38e-378bc4dd1277","Type":"ContainerStarted","Data":"d246defec5146ab2d9394a5c9df1f282c7f074babdb9b30c2b9e2d1fbfca0625"} Feb 18 19:39:42 crc kubenswrapper[4754]: I0218 19:39:42.950786 4754 generic.go:334] "Generic (PLEG): container finished" podID="e693768a-995f-412b-bede-020e44d17d03" containerID="4274789014a180f7b040256175cf68d15c60e81fafb8d63ab7f9c66b59be702b" exitCode=0 Feb 18 19:39:42 crc kubenswrapper[4754]: I0218 19:39:42.951024 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e693768a-995f-412b-bede-020e44d17d03","Type":"ContainerDied","Data":"4274789014a180f7b040256175cf68d15c60e81fafb8d63ab7f9c66b59be702b"} Feb 18 19:39:42 crc kubenswrapper[4754]: I0218 19:39:42.956661 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c6fb7f68-h72q7" event={"ID":"5de1542f-0d57-4a6e-bfac-7557e6dda66e","Type":"ContainerDied","Data":"0fdfd1421f46796e81fcaded9c1195eddebe168bc6e88aca8fa1c793a20f0d24"} Feb 18 19:39:42 crc kubenswrapper[4754]: I0218 19:39:42.956865 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c6fb7f68-h72q7" Feb 18 19:39:42 crc kubenswrapper[4754]: I0218 19:39:42.993502 4754 scope.go:117] "RemoveContainer" containerID="78930d0332c4e8c6b510e6585a390d7c76c5955b44bd1a2e3bba25ed3f96d339" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.038163 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.038632229 podStartE2EDuration="18.038122886s" podCreationTimestamp="2026-02-18 19:39:25 +0000 UTC" firstStartedPulling="2026-02-18 19:39:26.515278153 +0000 UTC m=+1268.965690949" lastFinishedPulling="2026-02-18 19:39:42.51476881 +0000 UTC m=+1284.965181606" observedRunningTime="2026-02-18 19:39:43.003959935 +0000 UTC m=+1285.454372731" watchObservedRunningTime="2026-02-18 19:39:43.038122886 +0000 UTC m=+1285.488535682" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.041655 4754 scope.go:117] "RemoveContainer" containerID="34b238d4c32258d2339dd3df201324c5556c452a2d58b11510e0b1c540ce9f9a" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.062812 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"f30c68d8-931c-4aca-a099-c7c969a92b61\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.062919 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-httpd-config\") pod \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.063041 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-config-data\") pod \"f30c68d8-931c-4aca-a099-c7c969a92b61\" 
(UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.063114 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-combined-ca-bundle\") pod \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.063163 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-combined-ca-bundle\") pod \"f30c68d8-931c-4aca-a099-c7c969a92b61\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.063191 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7smpj\" (UniqueName: \"kubernetes.io/projected/5de1542f-0d57-4a6e-bfac-7557e6dda66e-kube-api-access-7smpj\") pod \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.063214 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-scripts\") pod \"f30c68d8-931c-4aca-a099-c7c969a92b61\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.063250 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-config\") pod \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.063276 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/f30c68d8-931c-4aca-a099-c7c969a92b61-httpd-run\") pod \"f30c68d8-931c-4aca-a099-c7c969a92b61\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.063315 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f30c68d8-931c-4aca-a099-c7c969a92b61-logs\") pod \"f30c68d8-931c-4aca-a099-c7c969a92b61\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.063356 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-ovndb-tls-certs\") pod \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\" (UID: \"5de1542f-0d57-4a6e-bfac-7557e6dda66e\") " Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.063382 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz69w\" (UniqueName: \"kubernetes.io/projected/f30c68d8-931c-4aca-a099-c7c969a92b61-kube-api-access-xz69w\") pod \"f30c68d8-931c-4aca-a099-c7c969a92b61\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.063416 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-internal-tls-certs\") pod \"f30c68d8-931c-4aca-a099-c7c969a92b61\" (UID: \"f30c68d8-931c-4aca-a099-c7c969a92b61\") " Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.066621 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f30c68d8-931c-4aca-a099-c7c969a92b61-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f30c68d8-931c-4aca-a099-c7c969a92b61" (UID: "f30c68d8-931c-4aca-a099-c7c969a92b61"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.069370 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "f30c68d8-931c-4aca-a099-c7c969a92b61" (UID: "f30c68d8-931c-4aca-a099-c7c969a92b61"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.069743 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f30c68d8-931c-4aca-a099-c7c969a92b61-logs" (OuterVolumeSpecName: "logs") pod "f30c68d8-931c-4aca-a099-c7c969a92b61" (UID: "f30c68d8-931c-4aca-a099-c7c969a92b61"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.093492 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5de1542f-0d57-4a6e-bfac-7557e6dda66e" (UID: "5de1542f-0d57-4a6e-bfac-7557e6dda66e"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.093873 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f30c68d8-931c-4aca-a099-c7c969a92b61-kube-api-access-xz69w" (OuterVolumeSpecName: "kube-api-access-xz69w") pod "f30c68d8-931c-4aca-a099-c7c969a92b61" (UID: "f30c68d8-931c-4aca-a099-c7c969a92b61"). InnerVolumeSpecName "kube-api-access-xz69w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.098346 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-scripts" (OuterVolumeSpecName: "scripts") pod "f30c68d8-931c-4aca-a099-c7c969a92b61" (UID: "f30c68d8-931c-4aca-a099-c7c969a92b61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.112798 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5de1542f-0d57-4a6e-bfac-7557e6dda66e-kube-api-access-7smpj" (OuterVolumeSpecName: "kube-api-access-7smpj") pod "5de1542f-0d57-4a6e-bfac-7557e6dda66e" (UID: "5de1542f-0d57-4a6e-bfac-7557e6dda66e"). InnerVolumeSpecName "kube-api-access-7smpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.130785 4754 scope.go:117] "RemoveContainer" containerID="15243c08b4401352a5c40991de1875e41dd23e3ad2879feee8a08de5d3e7175a" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.131048 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f30c68d8-931c-4aca-a099-c7c969a92b61" (UID: "f30c68d8-931c-4aca-a099-c7c969a92b61"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.149869 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.150791 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5f7766589b-gh94d" podUID="c99f043f-84fb-4825-8ba7-c918263e6c7f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.151070 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.154936 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-config-data" (OuterVolumeSpecName: "config-data") pod "f30c68d8-931c-4aca-a099-c7c969a92b61" (UID: "f30c68d8-931c-4aca-a099-c7c969a92b61"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.166842 4754 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f30c68d8-931c-4aca-a099-c7c969a92b61-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.166869 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f30c68d8-931c-4aca-a099-c7c969a92b61-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.166879 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz69w\" (UniqueName: \"kubernetes.io/projected/f30c68d8-931c-4aca-a099-c7c969a92b61-kube-api-access-xz69w\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.166916 4754 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.166926 4754 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.166935 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.166945 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.166957 4754 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-7smpj\" (UniqueName: \"kubernetes.io/projected/5de1542f-0d57-4a6e-bfac-7557e6dda66e-kube-api-access-7smpj\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.166965 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.166967 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f30c68d8-931c-4aca-a099-c7c969a92b61" (UID: "f30c68d8-931c-4aca-a099-c7c969a92b61"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.172243 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5de1542f-0d57-4a6e-bfac-7557e6dda66e" (UID: "5de1542f-0d57-4a6e-bfac-7557e6dda66e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.197928 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-config" (OuterVolumeSpecName: "config") pod "5de1542f-0d57-4a6e-bfac-7557e6dda66e" (UID: "5de1542f-0d57-4a6e-bfac-7557e6dda66e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.200602 4754 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.224848 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5de1542f-0d57-4a6e-bfac-7557e6dda66e" (UID: "5de1542f-0d57-4a6e-bfac-7557e6dda66e"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.270063 4754 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.270092 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.270102 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.270112 4754 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5de1542f-0d57-4a6e-bfac-7557e6dda66e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.270121 4754 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f30c68d8-931c-4aca-a099-c7c969a92b61-internal-tls-certs\") on 
node \"crc\" DevicePath \"\"" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.292020 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.310855 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.324640 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7c6fb7f68-h72q7"] Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.337270 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:39:43 crc kubenswrapper[4754]: E0218 19:39:43.338036 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de1542f-0d57-4a6e-bfac-7557e6dda66e" containerName="neutron-httpd" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.338058 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de1542f-0d57-4a6e-bfac-7557e6dda66e" containerName="neutron-httpd" Feb 18 19:39:43 crc kubenswrapper[4754]: E0218 19:39:43.338072 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de1542f-0d57-4a6e-bfac-7557e6dda66e" containerName="neutron-api" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.338080 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de1542f-0d57-4a6e-bfac-7557e6dda66e" containerName="neutron-api" Feb 18 19:39:43 crc kubenswrapper[4754]: E0218 19:39:43.338093 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f30c68d8-931c-4aca-a099-c7c969a92b61" containerName="glance-log" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.338100 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f30c68d8-931c-4aca-a099-c7c969a92b61" containerName="glance-log" Feb 18 19:39:43 crc kubenswrapper[4754]: E0218 19:39:43.338133 4754 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f30c68d8-931c-4aca-a099-c7c969a92b61" containerName="glance-httpd" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.338161 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f30c68d8-931c-4aca-a099-c7c969a92b61" containerName="glance-httpd" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.338397 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="5de1542f-0d57-4a6e-bfac-7557e6dda66e" containerName="neutron-httpd" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.338408 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f30c68d8-931c-4aca-a099-c7c969a92b61" containerName="glance-httpd" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.338426 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="5de1542f-0d57-4a6e-bfac-7557e6dda66e" containerName="neutron-api" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.338440 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f30c68d8-931c-4aca-a099-c7c969a92b61" containerName="glance-log" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.340113 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.342974 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.343227 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.349378 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7c6fb7f68-h72q7"] Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.356615 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.475545 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.476546 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.476593 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqqgv\" (UniqueName: \"kubernetes.io/projected/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-kube-api-access-sqqgv\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: 
I0218 19:39:43.476625 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.476654 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-logs\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.476687 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.476727 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.476760 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc 
kubenswrapper[4754]: I0218 19:39:43.578471 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.578518 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.578554 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqqgv\" (UniqueName: \"kubernetes.io/projected/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-kube-api-access-sqqgv\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.578572 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.578594 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-logs\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.579325 4754 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.579613 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-logs\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.579935 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.579995 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.580035 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.580609 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.584329 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.586719 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.600386 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.601034 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.602133 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqqgv\" (UniqueName: \"kubernetes.io/projected/b49c6831-55db-46b9-8f1b-0c2f00beb3d7-kube-api-access-sqqgv\") pod 
\"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.656924 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b49c6831-55db-46b9-8f1b-0c2f00beb3d7\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:39:43 crc kubenswrapper[4754]: I0218 19:39:43.744702 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.060627 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72b4e9c4-d773-4dde-b38e-378bc4dd1277","Type":"ContainerStarted","Data":"9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9"} Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.143476 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.202337 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-config-data\") pod \"e693768a-995f-412b-bede-020e44d17d03\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.202785 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"e693768a-995f-412b-bede-020e44d17d03\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.202813 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-combined-ca-bundle\") pod \"e693768a-995f-412b-bede-020e44d17d03\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.202851 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-scripts\") pod \"e693768a-995f-412b-bede-020e44d17d03\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.202947 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4gpg\" (UniqueName: \"kubernetes.io/projected/e693768a-995f-412b-bede-020e44d17d03-kube-api-access-m4gpg\") pod \"e693768a-995f-412b-bede-020e44d17d03\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.203007 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-public-tls-certs\") pod \"e693768a-995f-412b-bede-020e44d17d03\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.203167 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e693768a-995f-412b-bede-020e44d17d03-httpd-run\") pod \"e693768a-995f-412b-bede-020e44d17d03\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.203310 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e693768a-995f-412b-bede-020e44d17d03-logs\") pod \"e693768a-995f-412b-bede-020e44d17d03\" (UID: \"e693768a-995f-412b-bede-020e44d17d03\") " Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.206178 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e693768a-995f-412b-bede-020e44d17d03-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e693768a-995f-412b-bede-020e44d17d03" (UID: "e693768a-995f-412b-bede-020e44d17d03"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.206421 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e693768a-995f-412b-bede-020e44d17d03-logs" (OuterVolumeSpecName: "logs") pod "e693768a-995f-412b-bede-020e44d17d03" (UID: "e693768a-995f-412b-bede-020e44d17d03"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.211929 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "e693768a-995f-412b-bede-020e44d17d03" (UID: "e693768a-995f-412b-bede-020e44d17d03"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.225022 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e693768a-995f-412b-bede-020e44d17d03-kube-api-access-m4gpg" (OuterVolumeSpecName: "kube-api-access-m4gpg") pod "e693768a-995f-412b-bede-020e44d17d03" (UID: "e693768a-995f-412b-bede-020e44d17d03"). InnerVolumeSpecName "kube-api-access-m4gpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.225162 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-scripts" (OuterVolumeSpecName: "scripts") pod "e693768a-995f-412b-bede-020e44d17d03" (UID: "e693768a-995f-412b-bede-020e44d17d03"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.238153 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5de1542f-0d57-4a6e-bfac-7557e6dda66e" path="/var/lib/kubelet/pods/5de1542f-0d57-4a6e-bfac-7557e6dda66e/volumes" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.239325 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f30c68d8-931c-4aca-a099-c7c969a92b61" path="/var/lib/kubelet/pods/f30c68d8-931c-4aca-a099-c7c969a92b61/volumes" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.279187 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e693768a-995f-412b-bede-020e44d17d03" (UID: "e693768a-995f-412b-bede-020e44d17d03"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.292232 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e693768a-995f-412b-bede-020e44d17d03" (UID: "e693768a-995f-412b-bede-020e44d17d03"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.303331 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-config-data" (OuterVolumeSpecName: "config-data") pod "e693768a-995f-412b-bede-020e44d17d03" (UID: "e693768a-995f-412b-bede-020e44d17d03"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.306486 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.306539 4754 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.306551 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.306564 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.306576 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4gpg\" (UniqueName: \"kubernetes.io/projected/e693768a-995f-412b-bede-020e44d17d03-kube-api-access-m4gpg\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.306588 4754 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e693768a-995f-412b-bede-020e44d17d03-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.306596 4754 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e693768a-995f-412b-bede-020e44d17d03-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.306608 4754 reconciler_common.go:293] "Volume 
detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e693768a-995f-412b-bede-020e44d17d03-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.337587 4754 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.381083 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:39:44 crc kubenswrapper[4754]: I0218 19:39:44.408859 4754 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.088628 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72b4e9c4-d773-4dde-b38e-378bc4dd1277","Type":"ContainerStarted","Data":"902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb"} Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.089896 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72b4e9c4-d773-4dde-b38e-378bc4dd1277","Type":"ContainerStarted","Data":"89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb"} Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.091666 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b49c6831-55db-46b9-8f1b-0c2f00beb3d7","Type":"ContainerStarted","Data":"5a680acb6fba617c4185985542747efac047c87a4563db241d66649eb0b78c70"} Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.091712 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"b49c6831-55db-46b9-8f1b-0c2f00beb3d7","Type":"ContainerStarted","Data":"0ac301f352d91e5af3bdbdd213d3acd9b3c39ae304b6c88c9ec2831d0c1080a0"} Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.098511 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e693768a-995f-412b-bede-020e44d17d03","Type":"ContainerDied","Data":"aa2998bcd8bc7c139b359dc741f6cd89a1e0306a553304d4b348b7c2db6e6122"} Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.098565 4754 scope.go:117] "RemoveContainer" containerID="4274789014a180f7b040256175cf68d15c60e81fafb8d63ab7f9c66b59be702b" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.098679 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.158444 4754 scope.go:117] "RemoveContainer" containerID="e2a3c2b5d2fcfdf790e283b783371fee59decad7964454a6bccb17e361e0e1ba" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.193420 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.212192 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.232265 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:39:45 crc kubenswrapper[4754]: E0218 19:39:45.232709 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e693768a-995f-412b-bede-020e44d17d03" containerName="glance-log" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.232724 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e693768a-995f-412b-bede-020e44d17d03" containerName="glance-log" Feb 18 19:39:45 crc kubenswrapper[4754]: E0218 19:39:45.232740 4754 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="e693768a-995f-412b-bede-020e44d17d03" containerName="glance-httpd" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.232747 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e693768a-995f-412b-bede-020e44d17d03" containerName="glance-httpd" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.232928 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e693768a-995f-412b-bede-020e44d17d03" containerName="glance-httpd" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.232944 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e693768a-995f-412b-bede-020e44d17d03" containerName="glance-log" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.234098 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.239997 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.243710 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.247208 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.333468 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e7bd2fbb-5e77-4418-96ca-96c900250a75-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.333881 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e7bd2fbb-5e77-4418-96ca-96c900250a75-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.333926 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7bd2fbb-5e77-4418-96ca-96c900250a75-logs\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.333969 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7bd2fbb-5e77-4418-96ca-96c900250a75-scripts\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.334026 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7bd2fbb-5e77-4418-96ca-96c900250a75-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.334084 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7bd2fbb-5e77-4418-96ca-96c900250a75-config-data\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.334207 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.334319 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmcbz\" (UniqueName: \"kubernetes.io/projected/e7bd2fbb-5e77-4418-96ca-96c900250a75-kube-api-access-kmcbz\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.436212 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7bd2fbb-5e77-4418-96ca-96c900250a75-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.436268 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7bd2fbb-5e77-4418-96ca-96c900250a75-logs\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.436299 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7bd2fbb-5e77-4418-96ca-96c900250a75-scripts\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.436331 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e7bd2fbb-5e77-4418-96ca-96c900250a75-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.436361 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7bd2fbb-5e77-4418-96ca-96c900250a75-config-data\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.436414 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.436472 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmcbz\" (UniqueName: \"kubernetes.io/projected/e7bd2fbb-5e77-4418-96ca-96c900250a75-kube-api-access-kmcbz\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.436494 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e7bd2fbb-5e77-4418-96ca-96c900250a75-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.437539 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e7bd2fbb-5e77-4418-96ca-96c900250a75-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.437618 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.443479 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7bd2fbb-5e77-4418-96ca-96c900250a75-logs\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.445886 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7bd2fbb-5e77-4418-96ca-96c900250a75-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.458704 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7bd2fbb-5e77-4418-96ca-96c900250a75-config-data\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.460294 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7bd2fbb-5e77-4418-96ca-96c900250a75-scripts\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " 
pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.471103 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmcbz\" (UniqueName: \"kubernetes.io/projected/e7bd2fbb-5e77-4418-96ca-96c900250a75-kube-api-access-kmcbz\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.471218 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7bd2fbb-5e77-4418-96ca-96c900250a75-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.506806 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e7bd2fbb-5e77-4418-96ca-96c900250a75\") " pod="openstack/glance-default-external-api-0" Feb 18 19:39:45 crc kubenswrapper[4754]: I0218 19:39:45.623363 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:39:46 crc kubenswrapper[4754]: I0218 19:39:46.116701 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b49c6831-55db-46b9-8f1b-0c2f00beb3d7","Type":"ContainerStarted","Data":"5f567caf2e74c302eec1965c0a421bdc575ffd81ef96c198d53bff76c07968f2"} Feb 18 19:39:46 crc kubenswrapper[4754]: I0218 19:39:46.166759 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.166731898 podStartE2EDuration="3.166731898s" podCreationTimestamp="2026-02-18 19:39:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:46.141484221 +0000 UTC m=+1288.591897017" watchObservedRunningTime="2026-02-18 19:39:46.166731898 +0000 UTC m=+1288.617144694" Feb 18 19:39:46 crc kubenswrapper[4754]: I0218 19:39:46.243533 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e693768a-995f-412b-bede-020e44d17d03" path="/var/lib/kubelet/pods/e693768a-995f-412b-bede-020e44d17d03/volumes" Feb 18 19:39:46 crc kubenswrapper[4754]: I0218 19:39:46.308297 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:39:46 crc kubenswrapper[4754]: W0218 19:39:46.313228 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7bd2fbb_5e77_4418_96ca_96c900250a75.slice/crio-fca616f3d77e91e27b1e2b67579234712724f7df29b25d8edf06a16d6da7e1a9 WatchSource:0}: Error finding container fca616f3d77e91e27b1e2b67579234712724f7df29b25d8edf06a16d6da7e1a9: Status 404 returned error can't find the container with id fca616f3d77e91e27b1e2b67579234712724f7df29b25d8edf06a16d6da7e1a9 Feb 18 19:39:47 crc kubenswrapper[4754]: I0218 19:39:47.132739 4754 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e7bd2fbb-5e77-4418-96ca-96c900250a75","Type":"ContainerStarted","Data":"fca616f3d77e91e27b1e2b67579234712724f7df29b25d8edf06a16d6da7e1a9"} Feb 18 19:39:48 crc kubenswrapper[4754]: I0218 19:39:48.147374 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72b4e9c4-d773-4dde-b38e-378bc4dd1277","Type":"ContainerStarted","Data":"5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead"} Feb 18 19:39:48 crc kubenswrapper[4754]: I0218 19:39:48.147842 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:39:48 crc kubenswrapper[4754]: I0218 19:39:48.147655 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerName="proxy-httpd" containerID="cri-o://5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead" gracePeriod=30 Feb 18 19:39:48 crc kubenswrapper[4754]: I0218 19:39:48.147612 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerName="ceilometer-central-agent" containerID="cri-o://9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9" gracePeriod=30 Feb 18 19:39:48 crc kubenswrapper[4754]: I0218 19:39:48.147722 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerName="ceilometer-notification-agent" containerID="cri-o://89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb" gracePeriod=30 Feb 18 19:39:48 crc kubenswrapper[4754]: I0218 19:39:48.147742 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerName="sg-core" 
containerID="cri-o://902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb" gracePeriod=30 Feb 18 19:39:48 crc kubenswrapper[4754]: I0218 19:39:48.154710 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e7bd2fbb-5e77-4418-96ca-96c900250a75","Type":"ContainerStarted","Data":"62cea83e9083bc7868acf4c9930baff20c1af5db4522d9afcda97f54945c7677"} Feb 18 19:39:48 crc kubenswrapper[4754]: I0218 19:39:48.154764 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e7bd2fbb-5e77-4418-96ca-96c900250a75","Type":"ContainerStarted","Data":"1c295522ed52ead0295a2894aa5aba6414e58016bc1069e194405c42d1dbf9ec"} Feb 18 19:39:48 crc kubenswrapper[4754]: I0218 19:39:48.215554 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=9.138719591 podStartE2EDuration="14.215521841s" podCreationTimestamp="2026-02-18 19:39:34 +0000 UTC" firstStartedPulling="2026-02-18 19:39:42.447635025 +0000 UTC m=+1284.898047821" lastFinishedPulling="2026-02-18 19:39:47.524437275 +0000 UTC m=+1289.974850071" observedRunningTime="2026-02-18 19:39:48.177583223 +0000 UTC m=+1290.627996019" watchObservedRunningTime="2026-02-18 19:39:48.215521841 +0000 UTC m=+1290.665934767" Feb 18 19:39:48 crc kubenswrapper[4754]: I0218 19:39:48.240060 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.2400325739999998 podStartE2EDuration="3.240032574s" podCreationTimestamp="2026-02-18 19:39:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:48.203296754 +0000 UTC m=+1290.653709550" watchObservedRunningTime="2026-02-18 19:39:48.240032574 +0000 UTC m=+1290.690445370" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.053277 4754 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.175808 4754 generic.go:334] "Generic (PLEG): container finished" podID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerID="5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead" exitCode=0 Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.176233 4754 generic.go:334] "Generic (PLEG): container finished" podID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerID="902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb" exitCode=2 Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.176245 4754 generic.go:334] "Generic (PLEG): container finished" podID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerID="89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb" exitCode=0 Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.176255 4754 generic.go:334] "Generic (PLEG): container finished" podID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerID="9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9" exitCode=0 Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.175907 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.175936 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72b4e9c4-d773-4dde-b38e-378bc4dd1277","Type":"ContainerDied","Data":"5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead"} Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.179180 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72b4e9c4-d773-4dde-b38e-378bc4dd1277","Type":"ContainerDied","Data":"902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb"} Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.179196 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72b4e9c4-d773-4dde-b38e-378bc4dd1277","Type":"ContainerDied","Data":"89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb"} Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.179205 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72b4e9c4-d773-4dde-b38e-378bc4dd1277","Type":"ContainerDied","Data":"9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9"} Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.179213 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72b4e9c4-d773-4dde-b38e-378bc4dd1277","Type":"ContainerDied","Data":"d246defec5146ab2d9394a5c9df1f282c7f074babdb9b30c2b9e2d1fbfca0625"} Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.179234 4754 scope.go:117] "RemoveContainer" containerID="5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.210268 4754 scope.go:117] "RemoveContainer" containerID="902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.233816 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72b4e9c4-d773-4dde-b38e-378bc4dd1277-run-httpd\") pod \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.233886 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72b4e9c4-d773-4dde-b38e-378bc4dd1277-log-httpd\") pod \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.233929 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-scripts\") pod \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.234126 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-config-data\") pod \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.234238 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-sg-core-conf-yaml\") pod \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.234284 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-combined-ca-bundle\") pod \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " Feb 18 
19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.234345 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkjxg\" (UniqueName: \"kubernetes.io/projected/72b4e9c4-d773-4dde-b38e-378bc4dd1277-kube-api-access-bkjxg\") pod \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\" (UID: \"72b4e9c4-d773-4dde-b38e-378bc4dd1277\") " Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.234408 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72b4e9c4-d773-4dde-b38e-378bc4dd1277-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "72b4e9c4-d773-4dde-b38e-378bc4dd1277" (UID: "72b4e9c4-d773-4dde-b38e-378bc4dd1277"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.234479 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72b4e9c4-d773-4dde-b38e-378bc4dd1277-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "72b4e9c4-d773-4dde-b38e-378bc4dd1277" (UID: "72b4e9c4-d773-4dde-b38e-378bc4dd1277"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.235340 4754 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72b4e9c4-d773-4dde-b38e-378bc4dd1277-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.235362 4754 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72b4e9c4-d773-4dde-b38e-378bc4dd1277-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.240824 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72b4e9c4-d773-4dde-b38e-378bc4dd1277-kube-api-access-bkjxg" (OuterVolumeSpecName: "kube-api-access-bkjxg") pod "72b4e9c4-d773-4dde-b38e-378bc4dd1277" (UID: "72b4e9c4-d773-4dde-b38e-378bc4dd1277"). InnerVolumeSpecName "kube-api-access-bkjxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.244019 4754 scope.go:117] "RemoveContainer" containerID="89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.245443 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-scripts" (OuterVolumeSpecName: "scripts") pod "72b4e9c4-d773-4dde-b38e-378bc4dd1277" (UID: "72b4e9c4-d773-4dde-b38e-378bc4dd1277"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.269463 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "72b4e9c4-d773-4dde-b38e-378bc4dd1277" (UID: "72b4e9c4-d773-4dde-b38e-378bc4dd1277"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.329830 4754 scope.go:117] "RemoveContainer" containerID="9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.336840 4754 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.336871 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkjxg\" (UniqueName: \"kubernetes.io/projected/72b4e9c4-d773-4dde-b38e-378bc4dd1277-kube-api-access-bkjxg\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.336884 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.364705 4754 scope.go:117] "RemoveContainer" containerID="5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead" Feb 18 19:39:49 crc kubenswrapper[4754]: E0218 19:39:49.367797 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead\": container with ID starting with 5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead not found: ID does not exist" containerID="5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.368000 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead"} err="failed to get container status 
\"5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead\": rpc error: code = NotFound desc = could not find container \"5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead\": container with ID starting with 5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead not found: ID does not exist" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.368087 4754 scope.go:117] "RemoveContainer" containerID="902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb" Feb 18 19:39:49 crc kubenswrapper[4754]: E0218 19:39:49.369345 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb\": container with ID starting with 902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb not found: ID does not exist" containerID="902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.369394 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb"} err="failed to get container status \"902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb\": rpc error: code = NotFound desc = could not find container \"902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb\": container with ID starting with 902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb not found: ID does not exist" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.369428 4754 scope.go:117] "RemoveContainer" containerID="89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb" Feb 18 19:39:49 crc kubenswrapper[4754]: E0218 19:39:49.369683 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb\": container with ID starting with 89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb not found: ID does not exist" containerID="89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.369741 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb"} err="failed to get container status \"89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb\": rpc error: code = NotFound desc = could not find container \"89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb\": container with ID starting with 89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb not found: ID does not exist" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.369762 4754 scope.go:117] "RemoveContainer" containerID="9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9" Feb 18 19:39:49 crc kubenswrapper[4754]: E0218 19:39:49.370066 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9\": container with ID starting with 9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9 not found: ID does not exist" containerID="9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.370173 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9"} err="failed to get container status \"9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9\": rpc error: code = NotFound desc = could not find container \"9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9\": container with ID 
starting with 9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9 not found: ID does not exist" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.370307 4754 scope.go:117] "RemoveContainer" containerID="5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.370629 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead"} err="failed to get container status \"5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead\": rpc error: code = NotFound desc = could not find container \"5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead\": container with ID starting with 5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead not found: ID does not exist" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.370708 4754 scope.go:117] "RemoveContainer" containerID="902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.371016 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb"} err="failed to get container status \"902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb\": rpc error: code = NotFound desc = could not find container \"902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb\": container with ID starting with 902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb not found: ID does not exist" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.371039 4754 scope.go:117] "RemoveContainer" containerID="89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.371607 4754 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb"} err="failed to get container status \"89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb\": rpc error: code = NotFound desc = could not find container \"89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb\": container with ID starting with 89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb not found: ID does not exist" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.371723 4754 scope.go:117] "RemoveContainer" containerID="9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.372100 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-config-data" (OuterVolumeSpecName: "config-data") pod "72b4e9c4-d773-4dde-b38e-378bc4dd1277" (UID: "72b4e9c4-d773-4dde-b38e-378bc4dd1277"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.372981 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9"} err="failed to get container status \"9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9\": rpc error: code = NotFound desc = could not find container \"9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9\": container with ID starting with 9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9 not found: ID does not exist" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.373226 4754 scope.go:117] "RemoveContainer" containerID="5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.374777 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead"} err="failed to get container status \"5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead\": rpc error: code = NotFound desc = could not find container \"5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead\": container with ID starting with 5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead not found: ID does not exist" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.374843 4754 scope.go:117] "RemoveContainer" containerID="902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.378055 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb"} err="failed to get container status \"902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb\": rpc error: code = NotFound desc = could not find container 
\"902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb\": container with ID starting with 902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb not found: ID does not exist" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.378089 4754 scope.go:117] "RemoveContainer" containerID="89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.378473 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb"} err="failed to get container status \"89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb\": rpc error: code = NotFound desc = could not find container \"89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb\": container with ID starting with 89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb not found: ID does not exist" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.378502 4754 scope.go:117] "RemoveContainer" containerID="9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.378958 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9"} err="failed to get container status \"9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9\": rpc error: code = NotFound desc = could not find container \"9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9\": container with ID starting with 9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9 not found: ID does not exist" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.379086 4754 scope.go:117] "RemoveContainer" containerID="5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.379491 4754 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead"} err="failed to get container status \"5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead\": rpc error: code = NotFound desc = could not find container \"5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead\": container with ID starting with 5f315c139e55870b6335e820065fb83051e557960c256e6031b6b361e4146ead not found: ID does not exist" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.379546 4754 scope.go:117] "RemoveContainer" containerID="902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.386411 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb"} err="failed to get container status \"902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb\": rpc error: code = NotFound desc = could not find container \"902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb\": container with ID starting with 902f1181685f5fef6460c8eca8a5c532977dec958873e8843f067d3a526848fb not found: ID does not exist" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.386560 4754 scope.go:117] "RemoveContainer" containerID="89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.386933 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb"} err="failed to get container status \"89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb\": rpc error: code = NotFound desc = could not find container \"89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb\": container with ID starting with 
89fc1af2346b85415faf983e20f000b547ab7152eb997142a0f5fe634ac5c6cb not found: ID does not exist" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.387035 4754 scope.go:117] "RemoveContainer" containerID="9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.387294 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9"} err="failed to get container status \"9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9\": rpc error: code = NotFound desc = could not find container \"9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9\": container with ID starting with 9fd6caffb6a47721b73479f7bd211073c7cef4c0da537e667f9302c60a3cc7f9 not found: ID does not exist" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.404283 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72b4e9c4-d773-4dde-b38e-378bc4dd1277" (UID: "72b4e9c4-d773-4dde-b38e-378bc4dd1277"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.438271 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.438299 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b4e9c4-d773-4dde-b38e-378bc4dd1277-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.514125 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.534215 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.542617 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:49 crc kubenswrapper[4754]: E0218 19:39:49.543115 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerName="ceilometer-notification-agent" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.543161 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerName="ceilometer-notification-agent" Feb 18 19:39:49 crc kubenswrapper[4754]: E0218 19:39:49.543187 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerName="sg-core" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.543196 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerName="sg-core" Feb 18 19:39:49 crc kubenswrapper[4754]: E0218 19:39:49.543210 4754 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerName="proxy-httpd" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.543230 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerName="proxy-httpd" Feb 18 19:39:49 crc kubenswrapper[4754]: E0218 19:39:49.543247 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerName="ceilometer-central-agent" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.543254 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerName="ceilometer-central-agent" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.543463 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerName="sg-core" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.543495 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerName="proxy-httpd" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.543511 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerName="ceilometer-notification-agent" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.543541 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" containerName="ceilometer-central-agent" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.546726 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.549690 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.549846 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.555151 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.641339 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n7m5\" (UniqueName: \"kubernetes.io/projected/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-kube-api-access-4n7m5\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.641689 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.641736 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-run-httpd\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.641768 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-scripts\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " 
pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.641807 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-config-data\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.641849 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.641883 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-log-httpd\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.743515 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n7m5\" (UniqueName: \"kubernetes.io/projected/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-kube-api-access-4n7m5\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.743577 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.743633 4754 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-run-httpd\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.743673 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-scripts\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.743726 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-config-data\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.743772 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.743816 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-log-httpd\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.744535 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-log-httpd\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: 
I0218 19:39:49.746096 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-run-httpd\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.751030 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-scripts\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.751079 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.753826 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.757020 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-config-data\") pod \"ceilometer-0\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.773899 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n7m5\" (UniqueName: \"kubernetes.io/projected/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-kube-api-access-4n7m5\") pod \"ceilometer-0\" (UID: 
\"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " pod="openstack/ceilometer-0" Feb 18 19:39:49 crc kubenswrapper[4754]: I0218 19:39:49.925392 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:39:50 crc kubenswrapper[4754]: I0218 19:39:50.276200 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72b4e9c4-d773-4dde-b38e-378bc4dd1277" path="/var/lib/kubelet/pods/72b4e9c4-d773-4dde-b38e-378bc4dd1277/volumes" Feb 18 19:39:50 crc kubenswrapper[4754]: I0218 19:39:50.437588 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:50 crc kubenswrapper[4754]: W0218 19:39:50.440025 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9ddb3b4_8b81_430d_92f2_f08efd0ad079.slice/crio-f0f715b13461b5fff3ed562bd1a7f97d3007c1c5fbf2f08b36de33151ef0cad8 WatchSource:0}: Error finding container f0f715b13461b5fff3ed562bd1a7f97d3007c1c5fbf2f08b36de33151ef0cad8: Status 404 returned error can't find the container with id f0f715b13461b5fff3ed562bd1a7f97d3007c1c5fbf2f08b36de33151ef0cad8 Feb 18 19:39:51 crc kubenswrapper[4754]: I0218 19:39:51.129319 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:39:51 crc kubenswrapper[4754]: I0218 19:39:51.130515 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="593f74ff-3d0a-4bf8-be87-e34fdda1b202" containerName="watcher-decision-engine" containerID="cri-o://9282ada21d27b554bdba907f8ac14600a68966fa6795e7e205c78deb368048a6" gracePeriod=30 Feb 18 19:39:51 crc kubenswrapper[4754]: I0218 19:39:51.201471 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e9ddb3b4-8b81-430d-92f2-f08efd0ad079","Type":"ContainerStarted","Data":"0355a20d3c1aacceac4186785b512554019aed304c92a441c3fe265b061eb664"} Feb 18 19:39:51 crc kubenswrapper[4754]: I0218 19:39:51.201527 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9ddb3b4-8b81-430d-92f2-f08efd0ad079","Type":"ContainerStarted","Data":"f0f715b13461b5fff3ed562bd1a7f97d3007c1c5fbf2f08b36de33151ef0cad8"} Feb 18 19:39:52 crc kubenswrapper[4754]: I0218 19:39:52.080447 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:52 crc kubenswrapper[4754]: I0218 19:39:52.224555 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9ddb3b4-8b81-430d-92f2-f08efd0ad079","Type":"ContainerStarted","Data":"d786971da706a369ab35733478ae81ea3f5111009b2444c35a3d8eb260812f3e"} Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.153422 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5f7766589b-gh94d" podUID="c99f043f-84fb-4825-8ba7-c918263e6c7f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.162:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.162:8443: connect: connection refused" Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.241655 4754 generic.go:334] "Generic (PLEG): container finished" podID="593f74ff-3d0a-4bf8-be87-e34fdda1b202" containerID="9282ada21d27b554bdba907f8ac14600a68966fa6795e7e205c78deb368048a6" exitCode=0 Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.241774 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"593f74ff-3d0a-4bf8-be87-e34fdda1b202","Type":"ContainerDied","Data":"9282ada21d27b554bdba907f8ac14600a68966fa6795e7e205c78deb368048a6"} Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.244677 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e9ddb3b4-8b81-430d-92f2-f08efd0ad079","Type":"ContainerStarted","Data":"a2b835e0eec7feaf9404bc38f4e20489f9313d02dffcbe7061517439b2776780"} Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.495193 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.650065 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593f74ff-3d0a-4bf8-be87-e34fdda1b202-combined-ca-bundle\") pod \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.650332 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593f74ff-3d0a-4bf8-be87-e34fdda1b202-config-data\") pod \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.650408 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbn6g\" (UniqueName: \"kubernetes.io/projected/593f74ff-3d0a-4bf8-be87-e34fdda1b202-kube-api-access-nbn6g\") pod \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.650522 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/593f74ff-3d0a-4bf8-be87-e34fdda1b202-custom-prometheus-ca\") pod \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.650589 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/593f74ff-3d0a-4bf8-be87-e34fdda1b202-logs\") pod \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\" (UID: \"593f74ff-3d0a-4bf8-be87-e34fdda1b202\") " Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.651399 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/593f74ff-3d0a-4bf8-be87-e34fdda1b202-logs" (OuterVolumeSpecName: "logs") pod "593f74ff-3d0a-4bf8-be87-e34fdda1b202" (UID: "593f74ff-3d0a-4bf8-be87-e34fdda1b202"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.658956 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593f74ff-3d0a-4bf8-be87-e34fdda1b202-kube-api-access-nbn6g" (OuterVolumeSpecName: "kube-api-access-nbn6g") pod "593f74ff-3d0a-4bf8-be87-e34fdda1b202" (UID: "593f74ff-3d0a-4bf8-be87-e34fdda1b202"). InnerVolumeSpecName "kube-api-access-nbn6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.724340 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593f74ff-3d0a-4bf8-be87-e34fdda1b202-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "593f74ff-3d0a-4bf8-be87-e34fdda1b202" (UID: "593f74ff-3d0a-4bf8-be87-e34fdda1b202"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.729517 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593f74ff-3d0a-4bf8-be87-e34fdda1b202-config-data" (OuterVolumeSpecName: "config-data") pod "593f74ff-3d0a-4bf8-be87-e34fdda1b202" (UID: "593f74ff-3d0a-4bf8-be87-e34fdda1b202"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.738310 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593f74ff-3d0a-4bf8-be87-e34fdda1b202-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "593f74ff-3d0a-4bf8-be87-e34fdda1b202" (UID: "593f74ff-3d0a-4bf8-be87-e34fdda1b202"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.745551 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.745618 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.753572 4754 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/593f74ff-3d0a-4bf8-be87-e34fdda1b202-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.753608 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/593f74ff-3d0a-4bf8-be87-e34fdda1b202-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.753618 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/593f74ff-3d0a-4bf8-be87-e34fdda1b202-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.753628 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/593f74ff-3d0a-4bf8-be87-e34fdda1b202-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.753638 
4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbn6g\" (UniqueName: \"kubernetes.io/projected/593f74ff-3d0a-4bf8-be87-e34fdda1b202-kube-api-access-nbn6g\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.796060 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:53 crc kubenswrapper[4754]: I0218 19:39:53.808008 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.257375 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.257489 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"593f74ff-3d0a-4bf8-be87-e34fdda1b202","Type":"ContainerDied","Data":"78c2cbf475b38e1ea8d7b255685f2bf7e91f10125d22acdc912875f1df8f89bf"} Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.257539 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.257559 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.257577 4754 scope.go:117] "RemoveContainer" containerID="9282ada21d27b554bdba907f8ac14600a68966fa6795e7e205c78deb368048a6" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.287634 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.296029 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 
19:39:54.320066 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:39:54 crc kubenswrapper[4754]: E0218 19:39:54.320536 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="593f74ff-3d0a-4bf8-be87-e34fdda1b202" containerName="watcher-decision-engine" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.320558 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="593f74ff-3d0a-4bf8-be87-e34fdda1b202" containerName="watcher-decision-engine" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.320780 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="593f74ff-3d0a-4bf8-be87-e34fdda1b202" containerName="watcher-decision-engine" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.321505 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.330985 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.337688 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.467332 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae925a80-c849-459a-95b0-a4e154fba313-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ae925a80-c849-459a-95b0-a4e154fba313\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.467712 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ae925a80-c849-459a-95b0-a4e154fba313-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: 
\"ae925a80-c849-459a-95b0-a4e154fba313\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.467785 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae925a80-c849-459a-95b0-a4e154fba313-logs\") pod \"watcher-decision-engine-0\" (UID: \"ae925a80-c849-459a-95b0-a4e154fba313\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.467892 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6h7b\" (UniqueName: \"kubernetes.io/projected/ae925a80-c849-459a-95b0-a4e154fba313-kube-api-access-k6h7b\") pod \"watcher-decision-engine-0\" (UID: \"ae925a80-c849-459a-95b0-a4e154fba313\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.468033 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae925a80-c849-459a-95b0-a4e154fba313-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ae925a80-c849-459a-95b0-a4e154fba313\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.570493 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ae925a80-c849-459a-95b0-a4e154fba313-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ae925a80-c849-459a-95b0-a4e154fba313\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.570557 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae925a80-c849-459a-95b0-a4e154fba313-logs\") pod \"watcher-decision-engine-0\" (UID: 
\"ae925a80-c849-459a-95b0-a4e154fba313\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.570614 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6h7b\" (UniqueName: \"kubernetes.io/projected/ae925a80-c849-459a-95b0-a4e154fba313-kube-api-access-k6h7b\") pod \"watcher-decision-engine-0\" (UID: \"ae925a80-c849-459a-95b0-a4e154fba313\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.570697 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae925a80-c849-459a-95b0-a4e154fba313-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ae925a80-c849-459a-95b0-a4e154fba313\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.570761 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae925a80-c849-459a-95b0-a4e154fba313-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ae925a80-c849-459a-95b0-a4e154fba313\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.571123 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae925a80-c849-459a-95b0-a4e154fba313-logs\") pod \"watcher-decision-engine-0\" (UID: \"ae925a80-c849-459a-95b0-a4e154fba313\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.577897 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ae925a80-c849-459a-95b0-a4e154fba313-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ae925a80-c849-459a-95b0-a4e154fba313\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:39:54 
crc kubenswrapper[4754]: I0218 19:39:54.579799 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae925a80-c849-459a-95b0-a4e154fba313-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ae925a80-c849-459a-95b0-a4e154fba313\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.579950 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae925a80-c849-459a-95b0-a4e154fba313-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ae925a80-c849-459a-95b0-a4e154fba313\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.595164 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6h7b\" (UniqueName: \"kubernetes.io/projected/ae925a80-c849-459a-95b0-a4e154fba313-kube-api-access-k6h7b\") pod \"watcher-decision-engine-0\" (UID: \"ae925a80-c849-459a-95b0-a4e154fba313\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:39:54 crc kubenswrapper[4754]: I0218 19:39:54.802681 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:39:55 crc kubenswrapper[4754]: I0218 19:39:55.273825 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9ddb3b4-8b81-430d-92f2-f08efd0ad079","Type":"ContainerStarted","Data":"201eefbf58203ad5dfb5bf4236e0f998cc84dcc2a2fa917373c313494adaeb00"} Feb 18 19:39:55 crc kubenswrapper[4754]: I0218 19:39:55.274154 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerName="ceilometer-central-agent" containerID="cri-o://0355a20d3c1aacceac4186785b512554019aed304c92a441c3fe265b061eb664" gracePeriod=30 Feb 18 19:39:55 crc kubenswrapper[4754]: I0218 19:39:55.274473 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:39:55 crc kubenswrapper[4754]: I0218 19:39:55.274529 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerName="proxy-httpd" containerID="cri-o://201eefbf58203ad5dfb5bf4236e0f998cc84dcc2a2fa917373c313494adaeb00" gracePeriod=30 Feb 18 19:39:55 crc kubenswrapper[4754]: I0218 19:39:55.274514 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerName="sg-core" containerID="cri-o://a2b835e0eec7feaf9404bc38f4e20489f9313d02dffcbe7061517439b2776780" gracePeriod=30 Feb 18 19:39:55 crc kubenswrapper[4754]: I0218 19:39:55.274590 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerName="ceilometer-notification-agent" containerID="cri-o://d786971da706a369ab35733478ae81ea3f5111009b2444c35a3d8eb260812f3e" gracePeriod=30 Feb 18 19:39:55 crc kubenswrapper[4754]: I0218 19:39:55.318014 4754 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:39:55 crc kubenswrapper[4754]: W0218 19:39:55.321371 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae925a80_c849_459a_95b0_a4e154fba313.slice/crio-10acac30cd62b053d84fb3d18d58bef7603a26a7772c8c61b1815e08eee6e5bc WatchSource:0}: Error finding container 10acac30cd62b053d84fb3d18d58bef7603a26a7772c8c61b1815e08eee6e5bc: Status 404 returned error can't find the container with id 10acac30cd62b053d84fb3d18d58bef7603a26a7772c8c61b1815e08eee6e5bc Feb 18 19:39:55 crc kubenswrapper[4754]: I0218 19:39:55.337507 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.447841045 podStartE2EDuration="6.337484869s" podCreationTimestamp="2026-02-18 19:39:49 +0000 UTC" firstStartedPulling="2026-02-18 19:39:50.443025429 +0000 UTC m=+1292.893438225" lastFinishedPulling="2026-02-18 19:39:54.332669253 +0000 UTC m=+1296.783082049" observedRunningTime="2026-02-18 19:39:55.322372513 +0000 UTC m=+1297.772785309" watchObservedRunningTime="2026-02-18 19:39:55.337484869 +0000 UTC m=+1297.787897665" Feb 18 19:39:55 crc kubenswrapper[4754]: I0218 19:39:55.624366 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:39:55 crc kubenswrapper[4754]: I0218 19:39:55.624685 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:39:55 crc kubenswrapper[4754]: I0218 19:39:55.823066 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 19:39:55 crc kubenswrapper[4754]: I0218 19:39:55.846367 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 19:39:56 crc 
kubenswrapper[4754]: I0218 19:39:56.223638 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593f74ff-3d0a-4bf8-be87-e34fdda1b202" path="/var/lib/kubelet/pods/593f74ff-3d0a-4bf8-be87-e34fdda1b202/volumes" Feb 18 19:39:56 crc kubenswrapper[4754]: I0218 19:39:56.287718 4754 generic.go:334] "Generic (PLEG): container finished" podID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerID="201eefbf58203ad5dfb5bf4236e0f998cc84dcc2a2fa917373c313494adaeb00" exitCode=0 Feb 18 19:39:56 crc kubenswrapper[4754]: I0218 19:39:56.287758 4754 generic.go:334] "Generic (PLEG): container finished" podID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerID="a2b835e0eec7feaf9404bc38f4e20489f9313d02dffcbe7061517439b2776780" exitCode=2 Feb 18 19:39:56 crc kubenswrapper[4754]: I0218 19:39:56.287774 4754 generic.go:334] "Generic (PLEG): container finished" podID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerID="d786971da706a369ab35733478ae81ea3f5111009b2444c35a3d8eb260812f3e" exitCode=0 Feb 18 19:39:56 crc kubenswrapper[4754]: I0218 19:39:56.287782 4754 generic.go:334] "Generic (PLEG): container finished" podID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerID="0355a20d3c1aacceac4186785b512554019aed304c92a441c3fe265b061eb664" exitCode=0 Feb 18 19:39:56 crc kubenswrapper[4754]: I0218 19:39:56.287839 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9ddb3b4-8b81-430d-92f2-f08efd0ad079","Type":"ContainerDied","Data":"201eefbf58203ad5dfb5bf4236e0f998cc84dcc2a2fa917373c313494adaeb00"} Feb 18 19:39:56 crc kubenswrapper[4754]: I0218 19:39:56.287869 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9ddb3b4-8b81-430d-92f2-f08efd0ad079","Type":"ContainerDied","Data":"a2b835e0eec7feaf9404bc38f4e20489f9313d02dffcbe7061517439b2776780"} Feb 18 19:39:56 crc kubenswrapper[4754]: I0218 19:39:56.287880 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e9ddb3b4-8b81-430d-92f2-f08efd0ad079","Type":"ContainerDied","Data":"d786971da706a369ab35733478ae81ea3f5111009b2444c35a3d8eb260812f3e"} Feb 18 19:39:56 crc kubenswrapper[4754]: I0218 19:39:56.287890 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9ddb3b4-8b81-430d-92f2-f08efd0ad079","Type":"ContainerDied","Data":"0355a20d3c1aacceac4186785b512554019aed304c92a441c3fe265b061eb664"} Feb 18 19:39:56 crc kubenswrapper[4754]: I0218 19:39:56.289325 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ae925a80-c849-459a-95b0-a4e154fba313","Type":"ContainerStarted","Data":"d92dafb50209300a76138b8474a360345f02cce881c52577816b02508d592021"} Feb 18 19:39:56 crc kubenswrapper[4754]: I0218 19:39:56.289369 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ae925a80-c849-459a-95b0-a4e154fba313","Type":"ContainerStarted","Data":"10acac30cd62b053d84fb3d18d58bef7603a26a7772c8c61b1815e08eee6e5bc"} Feb 18 19:39:56 crc kubenswrapper[4754]: I0218 19:39:56.289490 4754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:39:56 crc kubenswrapper[4754]: I0218 19:39:56.289517 4754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:39:56 crc kubenswrapper[4754]: I0218 19:39:56.289961 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 19:39:56 crc kubenswrapper[4754]: I0218 19:39:56.290062 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 19:39:56 crc kubenswrapper[4754]: I0218 19:39:56.328235 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.32820706 podStartE2EDuration="2.32820706s" podCreationTimestamp="2026-02-18 19:39:54 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:56.312351652 +0000 UTC m=+1298.762764458" watchObservedRunningTime="2026-02-18 19:39:56.32820706 +0000 UTC m=+1298.778619856" Feb 18 19:39:56 crc kubenswrapper[4754]: I0218 19:39:56.988524 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.037032 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-log-httpd\") pod \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.037099 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-sg-core-conf-yaml\") pod \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.037178 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-scripts\") pod \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.037403 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-run-httpd\") pod \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.037541 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-config-data\") pod \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.037648 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n7m5\" (UniqueName: \"kubernetes.io/projected/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-kube-api-access-4n7m5\") pod \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.037673 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e9ddb3b4-8b81-430d-92f2-f08efd0ad079" (UID: "e9ddb3b4-8b81-430d-92f2-f08efd0ad079"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.037744 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-combined-ca-bundle\") pod \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\" (UID: \"e9ddb3b4-8b81-430d-92f2-f08efd0ad079\") " Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.038543 4754 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.048316 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-scripts" (OuterVolumeSpecName: "scripts") pod "e9ddb3b4-8b81-430d-92f2-f08efd0ad079" (UID: "e9ddb3b4-8b81-430d-92f2-f08efd0ad079"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.048808 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e9ddb3b4-8b81-430d-92f2-f08efd0ad079" (UID: "e9ddb3b4-8b81-430d-92f2-f08efd0ad079"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.080389 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-kube-api-access-4n7m5" (OuterVolumeSpecName: "kube-api-access-4n7m5") pod "e9ddb3b4-8b81-430d-92f2-f08efd0ad079" (UID: "e9ddb3b4-8b81-430d-92f2-f08efd0ad079"). InnerVolumeSpecName "kube-api-access-4n7m5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.095951 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e9ddb3b4-8b81-430d-92f2-f08efd0ad079" (UID: "e9ddb3b4-8b81-430d-92f2-f08efd0ad079"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.144783 4754 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.144823 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.144837 4754 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.144850 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n7m5\" (UniqueName: \"kubernetes.io/projected/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-kube-api-access-4n7m5\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.188276 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9ddb3b4-8b81-430d-92f2-f08efd0ad079" (UID: "e9ddb3b4-8b81-430d-92f2-f08efd0ad079"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.246728 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.273956 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-config-data" (OuterVolumeSpecName: "config-data") pod "e9ddb3b4-8b81-430d-92f2-f08efd0ad079" (UID: "e9ddb3b4-8b81-430d-92f2-f08efd0ad079"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.302345 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9ddb3b4-8b81-430d-92f2-f08efd0ad079","Type":"ContainerDied","Data":"f0f715b13461b5fff3ed562bd1a7f97d3007c1c5fbf2f08b36de33151ef0cad8"} Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.302411 4754 scope.go:117] "RemoveContainer" containerID="201eefbf58203ad5dfb5bf4236e0f998cc84dcc2a2fa917373c313494adaeb00" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.302450 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.322699 4754 scope.go:117] "RemoveContainer" containerID="a2b835e0eec7feaf9404bc38f4e20489f9313d02dffcbe7061517439b2776780" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.348386 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9ddb3b4-8b81-430d-92f2-f08efd0ad079-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.352725 4754 scope.go:117] "RemoveContainer" containerID="d786971da706a369ab35733478ae81ea3f5111009b2444c35a3d8eb260812f3e" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.358422 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.384527 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.395936 4754 scope.go:117] "RemoveContainer" containerID="0355a20d3c1aacceac4186785b512554019aed304c92a441c3fe265b061eb664" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.401879 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:57 crc kubenswrapper[4754]: E0218 19:39:57.402441 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerName="ceilometer-central-agent" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.402469 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerName="ceilometer-central-agent" Feb 18 19:39:57 crc kubenswrapper[4754]: E0218 19:39:57.402503 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerName="proxy-httpd" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.402511 4754 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerName="proxy-httpd" Feb 18 19:39:57 crc kubenswrapper[4754]: E0218 19:39:57.402523 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerName="sg-core" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.402531 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerName="sg-core" Feb 18 19:39:57 crc kubenswrapper[4754]: E0218 19:39:57.402545 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerName="ceilometer-notification-agent" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.402553 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerName="ceilometer-notification-agent" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.402783 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerName="proxy-httpd" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.402806 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerName="sg-core" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.402819 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerName="ceilometer-central-agent" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.402841 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" containerName="ceilometer-notification-agent" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.405859 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.411888 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.412928 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.423233 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.451468 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/704d85f0-74f3-4d98-9c2d-0f499ac3df31-log-httpd\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.451517 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-scripts\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.451553 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.451603 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-config-data\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " 
pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.451639 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcvt8\" (UniqueName: \"kubernetes.io/projected/704d85f0-74f3-4d98-9c2d-0f499ac3df31-kube-api-access-rcvt8\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.451687 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.451705 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/704d85f0-74f3-4d98-9c2d-0f499ac3df31-run-httpd\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.480631 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.480754 4754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.553977 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.555052 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-config-data\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.555196 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcvt8\" (UniqueName: \"kubernetes.io/projected/704d85f0-74f3-4d98-9c2d-0f499ac3df31-kube-api-access-rcvt8\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.555299 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.555339 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/704d85f0-74f3-4d98-9c2d-0f499ac3df31-run-httpd\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.555432 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/704d85f0-74f3-4d98-9c2d-0f499ac3df31-log-httpd\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.555492 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-scripts\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc 
kubenswrapper[4754]: I0218 19:39:57.558050 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/704d85f0-74f3-4d98-9c2d-0f499ac3df31-run-httpd\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.559092 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/704d85f0-74f3-4d98-9c2d-0f499ac3df31-log-httpd\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.563042 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.563105 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.563581 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-config-data\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.563862 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-scripts\") pod \"ceilometer-0\" (UID: 
\"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.579425 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcvt8\" (UniqueName: \"kubernetes.io/projected/704d85f0-74f3-4d98-9c2d-0f499ac3df31-kube-api-access-rcvt8\") pod \"ceilometer-0\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " pod="openstack/ceilometer-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.609876 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:39:57 crc kubenswrapper[4754]: I0218 19:39:57.745811 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:39:58 crc kubenswrapper[4754]: I0218 19:39:58.227277 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9ddb3b4-8b81-430d-92f2-f08efd0ad079" path="/var/lib/kubelet/pods/e9ddb3b4-8b81-430d-92f2-f08efd0ad079/volumes" Feb 18 19:39:58 crc kubenswrapper[4754]: I0218 19:39:58.243379 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:58 crc kubenswrapper[4754]: I0218 19:39:58.297186 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:39:58 crc kubenswrapper[4754]: I0218 19:39:58.818881 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 19:39:58 crc kubenswrapper[4754]: I0218 19:39:58.819322 4754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:39:58 crc kubenswrapper[4754]: I0218 19:39:58.836934 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 19:39:59 crc kubenswrapper[4754]: I0218 19:39:59.335632 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"704d85f0-74f3-4d98-9c2d-0f499ac3df31","Type":"ContainerStarted","Data":"f09d35f4f4133a1c72e06f166289cf18805bf05527e386a1bf8c2204e9211b87"} Feb 18 19:40:00 crc kubenswrapper[4754]: I0218 19:40:00.345333 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"704d85f0-74f3-4d98-9c2d-0f499ac3df31","Type":"ContainerStarted","Data":"a354e64e8eb8d6fcc06e3b7f327af4de87694f6cce668e898e64defd60dc42cc"} Feb 18 19:40:00 crc kubenswrapper[4754]: I0218 19:40:00.345901 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"704d85f0-74f3-4d98-9c2d-0f499ac3df31","Type":"ContainerStarted","Data":"668f372952d2d37fd5402f3721dc3af5a4e5265b731ec128c9bae4318c3607f8"} Feb 18 19:40:02 crc kubenswrapper[4754]: I0218 19:40:02.366277 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"704d85f0-74f3-4d98-9c2d-0f499ac3df31","Type":"ContainerStarted","Data":"d422231a8abf23b993b76a3cba42aed2783d62276cd45448a31d85e610f6e58b"} Feb 18 19:40:04 crc kubenswrapper[4754]: I0218 19:40:04.388081 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"704d85f0-74f3-4d98-9c2d-0f499ac3df31","Type":"ContainerStarted","Data":"cecd8945a93bb626ad8946b1ad834a754368455128f0f1d06cc9f5f0b66e0a26"} Feb 18 19:40:04 crc kubenswrapper[4754]: I0218 19:40:04.388716 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:40:04 crc kubenswrapper[4754]: I0218 19:40:04.389062 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerName="sg-core" containerID="cri-o://d422231a8abf23b993b76a3cba42aed2783d62276cd45448a31d85e610f6e58b" gracePeriod=30 Feb 18 19:40:04 crc kubenswrapper[4754]: I0218 19:40:04.389169 4754 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerName="ceilometer-notification-agent" containerID="cri-o://a354e64e8eb8d6fcc06e3b7f327af4de87694f6cce668e898e64defd60dc42cc" gracePeriod=30 Feb 18 19:40:04 crc kubenswrapper[4754]: I0218 19:40:04.389108 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerName="proxy-httpd" containerID="cri-o://cecd8945a93bb626ad8946b1ad834a754368455128f0f1d06cc9f5f0b66e0a26" gracePeriod=30 Feb 18 19:40:04 crc kubenswrapper[4754]: I0218 19:40:04.390374 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerName="ceilometer-central-agent" containerID="cri-o://668f372952d2d37fd5402f3721dc3af5a4e5265b731ec128c9bae4318c3607f8" gracePeriod=30 Feb 18 19:40:04 crc kubenswrapper[4754]: I0218 19:40:04.426314 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.997084149 podStartE2EDuration="7.426267174s" podCreationTimestamp="2026-02-18 19:39:57 +0000 UTC" firstStartedPulling="2026-02-18 19:39:58.309856928 +0000 UTC m=+1300.760269724" lastFinishedPulling="2026-02-18 19:40:03.739039953 +0000 UTC m=+1306.189452749" observedRunningTime="2026-02-18 19:40:04.412847699 +0000 UTC m=+1306.863260505" watchObservedRunningTime="2026-02-18 19:40:04.426267174 +0000 UTC m=+1306.876679970" Feb 18 19:40:04 crc kubenswrapper[4754]: I0218 19:40:04.804760 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:40:04 crc kubenswrapper[4754]: I0218 19:40:04.834095 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 18 19:40:05 crc kubenswrapper[4754]: I0218 19:40:05.352830 4754 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:40:05 crc kubenswrapper[4754]: I0218 19:40:05.402386 4754 generic.go:334] "Generic (PLEG): container finished" podID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerID="cecd8945a93bb626ad8946b1ad834a754368455128f0f1d06cc9f5f0b66e0a26" exitCode=0 Feb 18 19:40:05 crc kubenswrapper[4754]: I0218 19:40:05.402432 4754 generic.go:334] "Generic (PLEG): container finished" podID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerID="d422231a8abf23b993b76a3cba42aed2783d62276cd45448a31d85e610f6e58b" exitCode=2 Feb 18 19:40:05 crc kubenswrapper[4754]: I0218 19:40:05.402446 4754 generic.go:334] "Generic (PLEG): container finished" podID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerID="a354e64e8eb8d6fcc06e3b7f327af4de87694f6cce668e898e64defd60dc42cc" exitCode=0 Feb 18 19:40:05 crc kubenswrapper[4754]: I0218 19:40:05.402717 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"704d85f0-74f3-4d98-9c2d-0f499ac3df31","Type":"ContainerDied","Data":"cecd8945a93bb626ad8946b1ad834a754368455128f0f1d06cc9f5f0b66e0a26"} Feb 18 19:40:05 crc kubenswrapper[4754]: I0218 19:40:05.402773 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"704d85f0-74f3-4d98-9c2d-0f499ac3df31","Type":"ContainerDied","Data":"d422231a8abf23b993b76a3cba42aed2783d62276cd45448a31d85e610f6e58b"} Feb 18 19:40:05 crc kubenswrapper[4754]: I0218 19:40:05.402791 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"704d85f0-74f3-4d98-9c2d-0f499ac3df31","Type":"ContainerDied","Data":"a354e64e8eb8d6fcc06e3b7f327af4de87694f6cce668e898e64defd60dc42cc"} Feb 18 19:40:05 crc kubenswrapper[4754]: I0218 19:40:05.403044 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 19:40:05 crc kubenswrapper[4754]: I0218 19:40:05.467045 4754 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 18 19:40:07 crc kubenswrapper[4754]: I0218 19:40:07.511191 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5f7766589b-gh94d" Feb 18 19:40:07 crc kubenswrapper[4754]: I0218 19:40:07.598633 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9577ccdb8-nfcx9"] Feb 18 19:40:07 crc kubenswrapper[4754]: I0218 19:40:07.598920 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-9577ccdb8-nfcx9" podUID="8afcabe6-a035-4ecd-8522-93afd1691f25" containerName="horizon-log" containerID="cri-o://6a052b7efef88a77b09203dde939054683d4837942b9da7c71526e02fa3db66f" gracePeriod=30 Feb 18 19:40:07 crc kubenswrapper[4754]: I0218 19:40:07.599486 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-9577ccdb8-nfcx9" podUID="8afcabe6-a035-4ecd-8522-93afd1691f25" containerName="horizon" containerID="cri-o://88b802d23292dd5c618c19202d02440a41dec604bfd64271e8807d8dc39458ab" gracePeriod=30 Feb 18 19:40:08 crc kubenswrapper[4754]: I0218 19:40:08.096736 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:40:08 crc kubenswrapper[4754]: I0218 19:40:08.097406 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:40:08 crc kubenswrapper[4754]: I0218 19:40:08.097574 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:40:08 crc kubenswrapper[4754]: I0218 19:40:08.098708 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96444caa3510b8204a97c50f5062d060301a59e158a321374c108effb01ab6a8"} pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:40:08 crc kubenswrapper[4754]: I0218 19:40:08.098965 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" containerID="cri-o://96444caa3510b8204a97c50f5062d060301a59e158a321374c108effb01ab6a8" gracePeriod=600 Feb 18 19:40:08 crc kubenswrapper[4754]: I0218 19:40:08.431565 4754 generic.go:334] "Generic (PLEG): container finished" podID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerID="96444caa3510b8204a97c50f5062d060301a59e158a321374c108effb01ab6a8" exitCode=0 Feb 18 19:40:08 crc kubenswrapper[4754]: I0218 19:40:08.431623 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerDied","Data":"96444caa3510b8204a97c50f5062d060301a59e158a321374c108effb01ab6a8"} Feb 18 19:40:08 crc kubenswrapper[4754]: I0218 19:40:08.431951 4754 scope.go:117] "RemoveContainer" containerID="c0026e2ecf3c88a72909f5c7c0de86e2f4abd80ac8afc7c18f8c5bf2f5f9229e" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.424385 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.448789 4754 generic.go:334] "Generic (PLEG): container finished" podID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerID="668f372952d2d37fd5402f3721dc3af5a4e5265b731ec128c9bae4318c3607f8" exitCode=0 Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.448857 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"704d85f0-74f3-4d98-9c2d-0f499ac3df31","Type":"ContainerDied","Data":"668f372952d2d37fd5402f3721dc3af5a4e5265b731ec128c9bae4318c3607f8"} Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.449293 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"704d85f0-74f3-4d98-9c2d-0f499ac3df31","Type":"ContainerDied","Data":"f09d35f4f4133a1c72e06f166289cf18805bf05527e386a1bf8c2204e9211b87"} Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.448880 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.449360 4754 scope.go:117] "RemoveContainer" containerID="cecd8945a93bb626ad8946b1ad834a754368455128f0f1d06cc9f5f0b66e0a26" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.455327 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333"} Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.492594 4754 scope.go:117] "RemoveContainer" containerID="d422231a8abf23b993b76a3cba42aed2783d62276cd45448a31d85e610f6e58b" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.514178 4754 scope.go:117] "RemoveContainer" containerID="a354e64e8eb8d6fcc06e3b7f327af4de87694f6cce668e898e64defd60dc42cc" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.536096 4754 scope.go:117] "RemoveContainer" containerID="668f372952d2d37fd5402f3721dc3af5a4e5265b731ec128c9bae4318c3607f8" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.554090 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-config-data\") pod \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.554188 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/704d85f0-74f3-4d98-9c2d-0f499ac3df31-log-httpd\") pod \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.554237 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-sg-core-conf-yaml\") pod \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.554288 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/704d85f0-74f3-4d98-9c2d-0f499ac3df31-run-httpd\") pod \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.554382 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-scripts\") pod \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.554473 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-combined-ca-bundle\") pod \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.554503 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcvt8\" (UniqueName: \"kubernetes.io/projected/704d85f0-74f3-4d98-9c2d-0f499ac3df31-kube-api-access-rcvt8\") pod \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\" (UID: \"704d85f0-74f3-4d98-9c2d-0f499ac3df31\") " Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.554682 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/704d85f0-74f3-4d98-9c2d-0f499ac3df31-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "704d85f0-74f3-4d98-9c2d-0f499ac3df31" (UID: "704d85f0-74f3-4d98-9c2d-0f499ac3df31"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.555306 4754 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/704d85f0-74f3-4d98-9c2d-0f499ac3df31-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.556436 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/704d85f0-74f3-4d98-9c2d-0f499ac3df31-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "704d85f0-74f3-4d98-9c2d-0f499ac3df31" (UID: "704d85f0-74f3-4d98-9c2d-0f499ac3df31"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.562080 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/704d85f0-74f3-4d98-9c2d-0f499ac3df31-kube-api-access-rcvt8" (OuterVolumeSpecName: "kube-api-access-rcvt8") pod "704d85f0-74f3-4d98-9c2d-0f499ac3df31" (UID: "704d85f0-74f3-4d98-9c2d-0f499ac3df31"). InnerVolumeSpecName "kube-api-access-rcvt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.564121 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-scripts" (OuterVolumeSpecName: "scripts") pod "704d85f0-74f3-4d98-9c2d-0f499ac3df31" (UID: "704d85f0-74f3-4d98-9c2d-0f499ac3df31"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.564807 4754 scope.go:117] "RemoveContainer" containerID="cecd8945a93bb626ad8946b1ad834a754368455128f0f1d06cc9f5f0b66e0a26" Feb 18 19:40:09 crc kubenswrapper[4754]: E0218 19:40:09.565410 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cecd8945a93bb626ad8946b1ad834a754368455128f0f1d06cc9f5f0b66e0a26\": container with ID starting with cecd8945a93bb626ad8946b1ad834a754368455128f0f1d06cc9f5f0b66e0a26 not found: ID does not exist" containerID="cecd8945a93bb626ad8946b1ad834a754368455128f0f1d06cc9f5f0b66e0a26" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.565495 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cecd8945a93bb626ad8946b1ad834a754368455128f0f1d06cc9f5f0b66e0a26"} err="failed to get container status \"cecd8945a93bb626ad8946b1ad834a754368455128f0f1d06cc9f5f0b66e0a26\": rpc error: code = NotFound desc = could not find container \"cecd8945a93bb626ad8946b1ad834a754368455128f0f1d06cc9f5f0b66e0a26\": container with ID starting with cecd8945a93bb626ad8946b1ad834a754368455128f0f1d06cc9f5f0b66e0a26 not found: ID does not exist" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.565536 4754 scope.go:117] "RemoveContainer" containerID="d422231a8abf23b993b76a3cba42aed2783d62276cd45448a31d85e610f6e58b" Feb 18 19:40:09 crc kubenswrapper[4754]: E0218 19:40:09.565947 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d422231a8abf23b993b76a3cba42aed2783d62276cd45448a31d85e610f6e58b\": container with ID starting with d422231a8abf23b993b76a3cba42aed2783d62276cd45448a31d85e610f6e58b not found: ID does not exist" containerID="d422231a8abf23b993b76a3cba42aed2783d62276cd45448a31d85e610f6e58b" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.565985 
4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d422231a8abf23b993b76a3cba42aed2783d62276cd45448a31d85e610f6e58b"} err="failed to get container status \"d422231a8abf23b993b76a3cba42aed2783d62276cd45448a31d85e610f6e58b\": rpc error: code = NotFound desc = could not find container \"d422231a8abf23b993b76a3cba42aed2783d62276cd45448a31d85e610f6e58b\": container with ID starting with d422231a8abf23b993b76a3cba42aed2783d62276cd45448a31d85e610f6e58b not found: ID does not exist" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.566016 4754 scope.go:117] "RemoveContainer" containerID="a354e64e8eb8d6fcc06e3b7f327af4de87694f6cce668e898e64defd60dc42cc" Feb 18 19:40:09 crc kubenswrapper[4754]: E0218 19:40:09.566316 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a354e64e8eb8d6fcc06e3b7f327af4de87694f6cce668e898e64defd60dc42cc\": container with ID starting with a354e64e8eb8d6fcc06e3b7f327af4de87694f6cce668e898e64defd60dc42cc not found: ID does not exist" containerID="a354e64e8eb8d6fcc06e3b7f327af4de87694f6cce668e898e64defd60dc42cc" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.566356 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a354e64e8eb8d6fcc06e3b7f327af4de87694f6cce668e898e64defd60dc42cc"} err="failed to get container status \"a354e64e8eb8d6fcc06e3b7f327af4de87694f6cce668e898e64defd60dc42cc\": rpc error: code = NotFound desc = could not find container \"a354e64e8eb8d6fcc06e3b7f327af4de87694f6cce668e898e64defd60dc42cc\": container with ID starting with a354e64e8eb8d6fcc06e3b7f327af4de87694f6cce668e898e64defd60dc42cc not found: ID does not exist" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.566380 4754 scope.go:117] "RemoveContainer" containerID="668f372952d2d37fd5402f3721dc3af5a4e5265b731ec128c9bae4318c3607f8" Feb 18 19:40:09 crc kubenswrapper[4754]: E0218 
19:40:09.566996 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"668f372952d2d37fd5402f3721dc3af5a4e5265b731ec128c9bae4318c3607f8\": container with ID starting with 668f372952d2d37fd5402f3721dc3af5a4e5265b731ec128c9bae4318c3607f8 not found: ID does not exist" containerID="668f372952d2d37fd5402f3721dc3af5a4e5265b731ec128c9bae4318c3607f8" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.567038 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"668f372952d2d37fd5402f3721dc3af5a4e5265b731ec128c9bae4318c3607f8"} err="failed to get container status \"668f372952d2d37fd5402f3721dc3af5a4e5265b731ec128c9bae4318c3607f8\": rpc error: code = NotFound desc = could not find container \"668f372952d2d37fd5402f3721dc3af5a4e5265b731ec128c9bae4318c3607f8\": container with ID starting with 668f372952d2d37fd5402f3721dc3af5a4e5265b731ec128c9bae4318c3607f8 not found: ID does not exist" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.600832 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "704d85f0-74f3-4d98-9c2d-0f499ac3df31" (UID: "704d85f0-74f3-4d98-9c2d-0f499ac3df31"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.644609 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "704d85f0-74f3-4d98-9c2d-0f499ac3df31" (UID: "704d85f0-74f3-4d98-9c2d-0f499ac3df31"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.657622 4754 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/704d85f0-74f3-4d98-9c2d-0f499ac3df31-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.657660 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.657671 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.657688 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcvt8\" (UniqueName: \"kubernetes.io/projected/704d85f0-74f3-4d98-9c2d-0f499ac3df31-kube-api-access-rcvt8\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.657697 4754 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.687521 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-config-data" (OuterVolumeSpecName: "config-data") pod "704d85f0-74f3-4d98-9c2d-0f499ac3df31" (UID: "704d85f0-74f3-4d98-9c2d-0f499ac3df31"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.759341 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/704d85f0-74f3-4d98-9c2d-0f499ac3df31-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.793275 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.806860 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.823363 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:40:09 crc kubenswrapper[4754]: E0218 19:40:09.824828 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerName="ceilometer-notification-agent" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.824905 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerName="ceilometer-notification-agent" Feb 18 19:40:09 crc kubenswrapper[4754]: E0218 19:40:09.824945 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerName="sg-core" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.824951 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerName="sg-core" Feb 18 19:40:09 crc kubenswrapper[4754]: E0218 19:40:09.824971 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerName="ceilometer-central-agent" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.824982 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerName="ceilometer-central-agent" Feb 18 19:40:09 crc 
kubenswrapper[4754]: E0218 19:40:09.824991 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerName="proxy-httpd" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.825000 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerName="proxy-httpd" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.825427 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerName="proxy-httpd" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.825453 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerName="ceilometer-notification-agent" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.825468 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerName="sg-core" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.825481 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" containerName="ceilometer-central-agent" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.829394 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.833843 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.834000 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.853745 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.963059 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-scripts\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.963146 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c101acb-e6ac-4e92-8a34-8641c837b9c7-log-httpd\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.963197 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c101acb-e6ac-4e92-8a34-8641c837b9c7-run-httpd\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.963268 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " 
pod="openstack/ceilometer-0" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.963350 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjrjc\" (UniqueName: \"kubernetes.io/projected/8c101acb-e6ac-4e92-8a34-8641c837b9c7-kube-api-access-kjrjc\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.963411 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:09 crc kubenswrapper[4754]: I0218 19:40:09.963568 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-config-data\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:09 crc kubenswrapper[4754]: E0218 19:40:09.978318 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod704d85f0_74f3_4d98_9c2d_0f499ac3df31.slice\": RecentStats: unable to find data in memory cache]" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.065500 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjrjc\" (UniqueName: \"kubernetes.io/projected/8c101acb-e6ac-4e92-8a34-8641c837b9c7-kube-api-access-kjrjc\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.065842 4754 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.065934 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-config-data\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.066007 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-scripts\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.066064 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c101acb-e6ac-4e92-8a34-8641c837b9c7-log-httpd\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.066093 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c101acb-e6ac-4e92-8a34-8641c837b9c7-run-httpd\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.066127 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:10 crc 
kubenswrapper[4754]: I0218 19:40:10.066605 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c101acb-e6ac-4e92-8a34-8641c837b9c7-run-httpd\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.066762 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c101acb-e6ac-4e92-8a34-8641c837b9c7-log-httpd\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.069926 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.070179 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-scripts\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.070229 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.071008 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-config-data\") pod \"ceilometer-0\" (UID: 
\"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.085704 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjrjc\" (UniqueName: \"kubernetes.io/projected/8c101acb-e6ac-4e92-8a34-8641c837b9c7-kube-api-access-kjrjc\") pod \"ceilometer-0\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " pod="openstack/ceilometer-0" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.155408 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.224945 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="704d85f0-74f3-4d98-9c2d-0f499ac3df31" path="/var/lib/kubelet/pods/704d85f0-74f3-4d98-9c2d-0f499ac3df31/volumes" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.290819 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-mxznm"] Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.292132 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-mxznm" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.328991 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mxznm"] Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.391627 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050eba1f-23da-4294-8cdf-4fad443211a2-operator-scripts\") pod \"nova-api-db-create-mxznm\" (UID: \"050eba1f-23da-4294-8cdf-4fad443211a2\") " pod="openstack/nova-api-db-create-mxznm" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.391856 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqssz\" (UniqueName: \"kubernetes.io/projected/050eba1f-23da-4294-8cdf-4fad443211a2-kube-api-access-fqssz\") pod \"nova-api-db-create-mxznm\" (UID: \"050eba1f-23da-4294-8cdf-4fad443211a2\") " pod="openstack/nova-api-db-create-mxznm" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.483643 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-gsxgh"] Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.485056 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-gsxgh" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.494252 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050eba1f-23da-4294-8cdf-4fad443211a2-operator-scripts\") pod \"nova-api-db-create-mxznm\" (UID: \"050eba1f-23da-4294-8cdf-4fad443211a2\") " pod="openstack/nova-api-db-create-mxznm" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.494295 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqssz\" (UniqueName: \"kubernetes.io/projected/050eba1f-23da-4294-8cdf-4fad443211a2-kube-api-access-fqssz\") pod \"nova-api-db-create-mxznm\" (UID: \"050eba1f-23da-4294-8cdf-4fad443211a2\") " pod="openstack/nova-api-db-create-mxznm" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.494982 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-gsxgh"] Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.495123 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050eba1f-23da-4294-8cdf-4fad443211a2-operator-scripts\") pod \"nova-api-db-create-mxznm\" (UID: \"050eba1f-23da-4294-8cdf-4fad443211a2\") " pod="openstack/nova-api-db-create-mxznm" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.557126 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqssz\" (UniqueName: \"kubernetes.io/projected/050eba1f-23da-4294-8cdf-4fad443211a2-kube-api-access-fqssz\") pod \"nova-api-db-create-mxznm\" (UID: \"050eba1f-23da-4294-8cdf-4fad443211a2\") " pod="openstack/nova-api-db-create-mxznm" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.596730 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4287d345-c068-46eb-a185-ee415ed11ade-operator-scripts\") pod \"nova-cell0-db-create-gsxgh\" (UID: \"4287d345-c068-46eb-a185-ee415ed11ade\") " pod="openstack/nova-cell0-db-create-gsxgh" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.597041 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxfjd\" (UniqueName: \"kubernetes.io/projected/4287d345-c068-46eb-a185-ee415ed11ade-kube-api-access-vxfjd\") pod \"nova-cell0-db-create-gsxgh\" (UID: \"4287d345-c068-46eb-a185-ee415ed11ade\") " pod="openstack/nova-cell0-db-create-gsxgh" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.597499 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-pmshz"] Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.607893 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-pmshz" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.611312 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mxznm" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.612174 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-dec9-account-create-update-svm4r"] Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.613824 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-dec9-account-create-update-svm4r" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.615047 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.629821 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-pmshz"] Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.663189 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-dec9-account-create-update-svm4r"] Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.699352 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec79017-52f6-47b7-b09f-c6ad2f738d97-operator-scripts\") pod \"nova-cell1-db-create-pmshz\" (UID: \"bec79017-52f6-47b7-b09f-c6ad2f738d97\") " pod="openstack/nova-cell1-db-create-pmshz" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.699435 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l8cn\" (UniqueName: \"kubernetes.io/projected/bec79017-52f6-47b7-b09f-c6ad2f738d97-kube-api-access-7l8cn\") pod \"nova-cell1-db-create-pmshz\" (UID: \"bec79017-52f6-47b7-b09f-c6ad2f738d97\") " pod="openstack/nova-cell1-db-create-pmshz" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.699494 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4287d345-c068-46eb-a185-ee415ed11ade-operator-scripts\") pod \"nova-cell0-db-create-gsxgh\" (UID: \"4287d345-c068-46eb-a185-ee415ed11ade\") " pod="openstack/nova-cell0-db-create-gsxgh" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.699532 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxfjd\" (UniqueName: 
\"kubernetes.io/projected/4287d345-c068-46eb-a185-ee415ed11ade-kube-api-access-vxfjd\") pod \"nova-cell0-db-create-gsxgh\" (UID: \"4287d345-c068-46eb-a185-ee415ed11ade\") " pod="openstack/nova-cell0-db-create-gsxgh" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.700988 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4287d345-c068-46eb-a185-ee415ed11ade-operator-scripts\") pod \"nova-cell0-db-create-gsxgh\" (UID: \"4287d345-c068-46eb-a185-ee415ed11ade\") " pod="openstack/nova-cell0-db-create-gsxgh" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.716241 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxfjd\" (UniqueName: \"kubernetes.io/projected/4287d345-c068-46eb-a185-ee415ed11ade-kube-api-access-vxfjd\") pod \"nova-cell0-db-create-gsxgh\" (UID: \"4287d345-c068-46eb-a185-ee415ed11ade\") " pod="openstack/nova-cell0-db-create-gsxgh" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.780558 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5aeb-account-create-update-w2g2d"] Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.782174 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5aeb-account-create-update-w2g2d" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.785514 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.800962 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec79017-52f6-47b7-b09f-c6ad2f738d97-operator-scripts\") pod \"nova-cell1-db-create-pmshz\" (UID: \"bec79017-52f6-47b7-b09f-c6ad2f738d97\") " pod="openstack/nova-cell1-db-create-pmshz" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.801022 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwbjb\" (UniqueName: \"kubernetes.io/projected/1ba06bd2-7464-4e3b-bb9f-cbafc0f44608-kube-api-access-dwbjb\") pod \"nova-api-dec9-account-create-update-svm4r\" (UID: \"1ba06bd2-7464-4e3b-bb9f-cbafc0f44608\") " pod="openstack/nova-api-dec9-account-create-update-svm4r" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.801075 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l8cn\" (UniqueName: \"kubernetes.io/projected/bec79017-52f6-47b7-b09f-c6ad2f738d97-kube-api-access-7l8cn\") pod \"nova-cell1-db-create-pmshz\" (UID: \"bec79017-52f6-47b7-b09f-c6ad2f738d97\") " pod="openstack/nova-cell1-db-create-pmshz" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.801340 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ba06bd2-7464-4e3b-bb9f-cbafc0f44608-operator-scripts\") pod \"nova-api-dec9-account-create-update-svm4r\" (UID: \"1ba06bd2-7464-4e3b-bb9f-cbafc0f44608\") " pod="openstack/nova-api-dec9-account-create-update-svm4r" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.802084 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec79017-52f6-47b7-b09f-c6ad2f738d97-operator-scripts\") pod \"nova-cell1-db-create-pmshz\" (UID: \"bec79017-52f6-47b7-b09f-c6ad2f738d97\") " pod="openstack/nova-cell1-db-create-pmshz" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.815470 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5aeb-account-create-update-w2g2d"] Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.823156 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-gsxgh" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.830284 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l8cn\" (UniqueName: \"kubernetes.io/projected/bec79017-52f6-47b7-b09f-c6ad2f738d97-kube-api-access-7l8cn\") pod \"nova-cell1-db-create-pmshz\" (UID: \"bec79017-52f6-47b7-b09f-c6ad2f738d97\") " pod="openstack/nova-cell1-db-create-pmshz" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.902782 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-d6ef-account-create-update-l2xhf"] Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.904277 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d6ef-account-create-update-l2xhf" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.904630 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ba06bd2-7464-4e3b-bb9f-cbafc0f44608-operator-scripts\") pod \"nova-api-dec9-account-create-update-svm4r\" (UID: \"1ba06bd2-7464-4e3b-bb9f-cbafc0f44608\") " pod="openstack/nova-api-dec9-account-create-update-svm4r" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.904707 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/154b46c9-21a8-42bc-897c-51c2c9691dd1-operator-scripts\") pod \"nova-cell0-5aeb-account-create-update-w2g2d\" (UID: \"154b46c9-21a8-42bc-897c-51c2c9691dd1\") " pod="openstack/nova-cell0-5aeb-account-create-update-w2g2d" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.904761 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwbjb\" (UniqueName: \"kubernetes.io/projected/1ba06bd2-7464-4e3b-bb9f-cbafc0f44608-kube-api-access-dwbjb\") pod \"nova-api-dec9-account-create-update-svm4r\" (UID: \"1ba06bd2-7464-4e3b-bb9f-cbafc0f44608\") " pod="openstack/nova-api-dec9-account-create-update-svm4r" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.904803 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snlxj\" (UniqueName: \"kubernetes.io/projected/154b46c9-21a8-42bc-897c-51c2c9691dd1-kube-api-access-snlxj\") pod \"nova-cell0-5aeb-account-create-update-w2g2d\" (UID: \"154b46c9-21a8-42bc-897c-51c2c9691dd1\") " pod="openstack/nova-cell0-5aeb-account-create-update-w2g2d" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.907758 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1ba06bd2-7464-4e3b-bb9f-cbafc0f44608-operator-scripts\") pod \"nova-api-dec9-account-create-update-svm4r\" (UID: \"1ba06bd2-7464-4e3b-bb9f-cbafc0f44608\") " pod="openstack/nova-api-dec9-account-create-update-svm4r" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.918936 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.928917 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwbjb\" (UniqueName: \"kubernetes.io/projected/1ba06bd2-7464-4e3b-bb9f-cbafc0f44608-kube-api-access-dwbjb\") pod \"nova-api-dec9-account-create-update-svm4r\" (UID: \"1ba06bd2-7464-4e3b-bb9f-cbafc0f44608\") " pod="openstack/nova-api-dec9-account-create-update-svm4r" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.929447 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.934605 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-pmshz" Feb 18 19:40:10 crc kubenswrapper[4754]: I0218 19:40:10.941297 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d6ef-account-create-update-l2xhf"] Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.007193 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7178952f-0bf0-472c-9a3f-0c0794b32590-operator-scripts\") pod \"nova-cell1-d6ef-account-create-update-l2xhf\" (UID: \"7178952f-0bf0-472c-9a3f-0c0794b32590\") " pod="openstack/nova-cell1-d6ef-account-create-update-l2xhf" Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.007242 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/154b46c9-21a8-42bc-897c-51c2c9691dd1-operator-scripts\") pod \"nova-cell0-5aeb-account-create-update-w2g2d\" (UID: \"154b46c9-21a8-42bc-897c-51c2c9691dd1\") " pod="openstack/nova-cell0-5aeb-account-create-update-w2g2d" Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.007291 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kqwj\" (UniqueName: \"kubernetes.io/projected/7178952f-0bf0-472c-9a3f-0c0794b32590-kube-api-access-9kqwj\") pod \"nova-cell1-d6ef-account-create-update-l2xhf\" (UID: \"7178952f-0bf0-472c-9a3f-0c0794b32590\") " pod="openstack/nova-cell1-d6ef-account-create-update-l2xhf" Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.007336 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snlxj\" (UniqueName: \"kubernetes.io/projected/154b46c9-21a8-42bc-897c-51c2c9691dd1-kube-api-access-snlxj\") pod \"nova-cell0-5aeb-account-create-update-w2g2d\" (UID: \"154b46c9-21a8-42bc-897c-51c2c9691dd1\") " pod="openstack/nova-cell0-5aeb-account-create-update-w2g2d" 
Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.008431 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/154b46c9-21a8-42bc-897c-51c2c9691dd1-operator-scripts\") pod \"nova-cell0-5aeb-account-create-update-w2g2d\" (UID: \"154b46c9-21a8-42bc-897c-51c2c9691dd1\") " pod="openstack/nova-cell0-5aeb-account-create-update-w2g2d" Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.031020 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snlxj\" (UniqueName: \"kubernetes.io/projected/154b46c9-21a8-42bc-897c-51c2c9691dd1-kube-api-access-snlxj\") pod \"nova-cell0-5aeb-account-create-update-w2g2d\" (UID: \"154b46c9-21a8-42bc-897c-51c2c9691dd1\") " pod="openstack/nova-cell0-5aeb-account-create-update-w2g2d" Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.098997 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-dec9-account-create-update-svm4r" Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.108960 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kqwj\" (UniqueName: \"kubernetes.io/projected/7178952f-0bf0-472c-9a3f-0c0794b32590-kube-api-access-9kqwj\") pod \"nova-cell1-d6ef-account-create-update-l2xhf\" (UID: \"7178952f-0bf0-472c-9a3f-0c0794b32590\") " pod="openstack/nova-cell1-d6ef-account-create-update-l2xhf" Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.109163 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7178952f-0bf0-472c-9a3f-0c0794b32590-operator-scripts\") pod \"nova-cell1-d6ef-account-create-update-l2xhf\" (UID: \"7178952f-0bf0-472c-9a3f-0c0794b32590\") " pod="openstack/nova-cell1-d6ef-account-create-update-l2xhf" Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.109920 4754 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7178952f-0bf0-472c-9a3f-0c0794b32590-operator-scripts\") pod \"nova-cell1-d6ef-account-create-update-l2xhf\" (UID: \"7178952f-0bf0-472c-9a3f-0c0794b32590\") " pod="openstack/nova-cell1-d6ef-account-create-update-l2xhf" Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.115931 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5aeb-account-create-update-w2g2d" Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.144809 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kqwj\" (UniqueName: \"kubernetes.io/projected/7178952f-0bf0-472c-9a3f-0c0794b32590-kube-api-access-9kqwj\") pod \"nova-cell1-d6ef-account-create-update-l2xhf\" (UID: \"7178952f-0bf0-472c-9a3f-0c0794b32590\") " pod="openstack/nova-cell1-d6ef-account-create-update-l2xhf" Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.231725 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d6ef-account-create-update-l2xhf" Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.256977 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-gsxgh"] Feb 18 19:40:11 crc kubenswrapper[4754]: W0218 19:40:11.287539 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4287d345_c068_46eb_a185_ee415ed11ade.slice/crio-c614605bdeea3a7946329f5ccf119f52213beb16c54f9383699ebfd4ca8bad3b WatchSource:0}: Error finding container c614605bdeea3a7946329f5ccf119f52213beb16c54f9383699ebfd4ca8bad3b: Status 404 returned error can't find the container with id c614605bdeea3a7946329f5ccf119f52213beb16c54f9383699ebfd4ca8bad3b Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.290567 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-mxznm"] Feb 18 19:40:11 crc kubenswrapper[4754]: W0218 19:40:11.313845 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod050eba1f_23da_4294_8cdf_4fad443211a2.slice/crio-b4ceee819f3ccda7ac8a2649c30ac23551014b4bb0a677fdaa91a6990cbc52a6 WatchSource:0}: Error finding container b4ceee819f3ccda7ac8a2649c30ac23551014b4bb0a677fdaa91a6990cbc52a6: Status 404 returned error can't find the container with id b4ceee819f3ccda7ac8a2649c30ac23551014b4bb0a677fdaa91a6990cbc52a6 Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.508770 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c101acb-e6ac-4e92-8a34-8641c837b9c7","Type":"ContainerStarted","Data":"f033ce13e4d5fc6bab63c39ed3f8e84d144dbec32c8891cf839dfe1d34e5297c"} Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.515805 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-gsxgh" 
event={"ID":"4287d345-c068-46eb-a185-ee415ed11ade","Type":"ContainerStarted","Data":"c614605bdeea3a7946329f5ccf119f52213beb16c54f9383699ebfd4ca8bad3b"} Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.522189 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mxznm" event={"ID":"050eba1f-23da-4294-8cdf-4fad443211a2","Type":"ContainerStarted","Data":"b4ceee819f3ccda7ac8a2649c30ac23551014b4bb0a677fdaa91a6990cbc52a6"} Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.524253 4754 generic.go:334] "Generic (PLEG): container finished" podID="8afcabe6-a035-4ecd-8522-93afd1691f25" containerID="88b802d23292dd5c618c19202d02440a41dec604bfd64271e8807d8dc39458ab" exitCode=0 Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.524307 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9577ccdb8-nfcx9" event={"ID":"8afcabe6-a035-4ecd-8522-93afd1691f25","Type":"ContainerDied","Data":"88b802d23292dd5c618c19202d02440a41dec604bfd64271e8807d8dc39458ab"} Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.625602 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-pmshz"] Feb 18 19:40:11 crc kubenswrapper[4754]: I0218 19:40:11.903131 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d6ef-account-create-update-l2xhf"] Feb 18 19:40:12 crc kubenswrapper[4754]: I0218 19:40:12.017022 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-dec9-account-create-update-svm4r"] Feb 18 19:40:12 crc kubenswrapper[4754]: I0218 19:40:12.055402 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5aeb-account-create-update-w2g2d"] Feb 18 19:40:12 crc kubenswrapper[4754]: I0218 19:40:12.538098 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5aeb-account-create-update-w2g2d" 
event={"ID":"154b46c9-21a8-42bc-897c-51c2c9691dd1","Type":"ContainerStarted","Data":"09802535f499ec8c8fcdf237c63954a07d837d25c3d20fa6a4e67a571f394775"} Feb 18 19:40:12 crc kubenswrapper[4754]: I0218 19:40:12.538742 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5aeb-account-create-update-w2g2d" event={"ID":"154b46c9-21a8-42bc-897c-51c2c9691dd1","Type":"ContainerStarted","Data":"1681854e7acf374f2badb200d60c83ac04d6e6e9eba55a86369bbe775600dfb8"} Feb 18 19:40:12 crc kubenswrapper[4754]: I0218 19:40:12.546756 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dec9-account-create-update-svm4r" event={"ID":"1ba06bd2-7464-4e3b-bb9f-cbafc0f44608","Type":"ContainerStarted","Data":"846d33c5dccf93caa4ea82def9127153fdfb66d491a261f7625068799e343d33"} Feb 18 19:40:12 crc kubenswrapper[4754]: I0218 19:40:12.546848 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dec9-account-create-update-svm4r" event={"ID":"1ba06bd2-7464-4e3b-bb9f-cbafc0f44608","Type":"ContainerStarted","Data":"e8ffdf6d2b8bb299f3d5d40e88a4dd5f59804c856deea8c61a4288d9dfb73cc1"} Feb 18 19:40:12 crc kubenswrapper[4754]: I0218 19:40:12.566670 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d6ef-account-create-update-l2xhf" event={"ID":"7178952f-0bf0-472c-9a3f-0c0794b32590","Type":"ContainerStarted","Data":"8a25a6648529459b524d55d22e89f2da996df30cb2103454ce944202b4041ace"} Feb 18 19:40:12 crc kubenswrapper[4754]: I0218 19:40:12.566740 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d6ef-account-create-update-l2xhf" event={"ID":"7178952f-0bf0-472c-9a3f-0c0794b32590","Type":"ContainerStarted","Data":"5bd99769dd778a84304ae48726b1228ea2de09a74078667ffa728749eba1163f"} Feb 18 19:40:12 crc kubenswrapper[4754]: I0218 19:40:12.571977 4754 generic.go:334] "Generic (PLEG): container finished" podID="4287d345-c068-46eb-a185-ee415ed11ade" 
containerID="669bf70abd2469d0e706fd74b2069e8b88c57ab19c56fa45b5d7b186fd66a7bf" exitCode=0 Feb 18 19:40:12 crc kubenswrapper[4754]: I0218 19:40:12.572084 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-gsxgh" event={"ID":"4287d345-c068-46eb-a185-ee415ed11ade","Type":"ContainerDied","Data":"669bf70abd2469d0e706fd74b2069e8b88c57ab19c56fa45b5d7b186fd66a7bf"} Feb 18 19:40:12 crc kubenswrapper[4754]: I0218 19:40:12.574678 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-5aeb-account-create-update-w2g2d" podStartSLOduration=2.574649852 podStartE2EDuration="2.574649852s" podCreationTimestamp="2026-02-18 19:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:12.563444642 +0000 UTC m=+1315.013857438" watchObservedRunningTime="2026-02-18 19:40:12.574649852 +0000 UTC m=+1315.025062648" Feb 18 19:40:12 crc kubenswrapper[4754]: I0218 19:40:12.579323 4754 generic.go:334] "Generic (PLEG): container finished" podID="bec79017-52f6-47b7-b09f-c6ad2f738d97" containerID="cef6e2514d9e40174c53b809235e91a5fbf4cbf1ee371e8cabb38bef985c866a" exitCode=0 Feb 18 19:40:12 crc kubenswrapper[4754]: I0218 19:40:12.579473 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pmshz" event={"ID":"bec79017-52f6-47b7-b09f-c6ad2f738d97","Type":"ContainerDied","Data":"cef6e2514d9e40174c53b809235e91a5fbf4cbf1ee371e8cabb38bef985c866a"} Feb 18 19:40:12 crc kubenswrapper[4754]: I0218 19:40:12.579535 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pmshz" event={"ID":"bec79017-52f6-47b7-b09f-c6ad2f738d97","Type":"ContainerStarted","Data":"793e633ba2db34af5812f7f620e95e26466dd87e48dbc1dd8dd86fb0d87228b1"} Feb 18 19:40:12 crc kubenswrapper[4754]: I0218 19:40:12.589308 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-dec9-account-create-update-svm4r" podStartSLOduration=2.589285292 podStartE2EDuration="2.589285292s" podCreationTimestamp="2026-02-18 19:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:12.582286386 +0000 UTC m=+1315.032699182" watchObservedRunningTime="2026-02-18 19:40:12.589285292 +0000 UTC m=+1315.039698098" Feb 18 19:40:12 crc kubenswrapper[4754]: I0218 19:40:12.592651 4754 generic.go:334] "Generic (PLEG): container finished" podID="050eba1f-23da-4294-8cdf-4fad443211a2" containerID="38868d996db43619ee1a628c5127ced8fd6e1d705c8afde1689d6ad3ba18032a" exitCode=0 Feb 18 19:40:12 crc kubenswrapper[4754]: I0218 19:40:12.592814 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mxznm" event={"ID":"050eba1f-23da-4294-8cdf-4fad443211a2","Type":"ContainerDied","Data":"38868d996db43619ee1a628c5127ced8fd6e1d705c8afde1689d6ad3ba18032a"} Feb 18 19:40:12 crc kubenswrapper[4754]: I0218 19:40:12.598008 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c101acb-e6ac-4e92-8a34-8641c837b9c7","Type":"ContainerStarted","Data":"206bd0162aab9a2eb0f20d47ba90b6aade689747f57f3db22b966ddf91449382"} Feb 18 19:40:13 crc kubenswrapper[4754]: I0218 19:40:13.015686 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-9577ccdb8-nfcx9" podUID="8afcabe6-a035-4ecd-8522-93afd1691f25" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.161:8443: connect: connection refused" Feb 18 19:40:13 crc kubenswrapper[4754]: I0218 19:40:13.610882 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c101acb-e6ac-4e92-8a34-8641c837b9c7","Type":"ContainerStarted","Data":"9f8bb8c3c2809633fb21d18cbe10084a027a2a869009cc22e868d93f88fcaef3"} 
Feb 18 19:40:13 crc kubenswrapper[4754]: I0218 19:40:13.612895 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c101acb-e6ac-4e92-8a34-8641c837b9c7","Type":"ContainerStarted","Data":"57170c24881f11cec7d70328d93cc6582762490a3a7df71c8d4533bb45fa5b6c"} Feb 18 19:40:13 crc kubenswrapper[4754]: I0218 19:40:13.613049 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5aeb-account-create-update-w2g2d" event={"ID":"154b46c9-21a8-42bc-897c-51c2c9691dd1","Type":"ContainerDied","Data":"09802535f499ec8c8fcdf237c63954a07d837d25c3d20fa6a4e67a571f394775"} Feb 18 19:40:13 crc kubenswrapper[4754]: I0218 19:40:13.612418 4754 generic.go:334] "Generic (PLEG): container finished" podID="154b46c9-21a8-42bc-897c-51c2c9691dd1" containerID="09802535f499ec8c8fcdf237c63954a07d837d25c3d20fa6a4e67a571f394775" exitCode=0 Feb 18 19:40:13 crc kubenswrapper[4754]: I0218 19:40:13.614983 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ba06bd2-7464-4e3b-bb9f-cbafc0f44608" containerID="846d33c5dccf93caa4ea82def9127153fdfb66d491a261f7625068799e343d33" exitCode=0 Feb 18 19:40:13 crc kubenswrapper[4754]: I0218 19:40:13.615074 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dec9-account-create-update-svm4r" event={"ID":"1ba06bd2-7464-4e3b-bb9f-cbafc0f44608","Type":"ContainerDied","Data":"846d33c5dccf93caa4ea82def9127153fdfb66d491a261f7625068799e343d33"} Feb 18 19:40:13 crc kubenswrapper[4754]: I0218 19:40:13.618365 4754 generic.go:334] "Generic (PLEG): container finished" podID="7178952f-0bf0-472c-9a3f-0c0794b32590" containerID="8a25a6648529459b524d55d22e89f2da996df30cb2103454ce944202b4041ace" exitCode=0 Feb 18 19:40:13 crc kubenswrapper[4754]: I0218 19:40:13.618427 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d6ef-account-create-update-l2xhf" 
event={"ID":"7178952f-0bf0-472c-9a3f-0c0794b32590","Type":"ContainerDied","Data":"8a25a6648529459b524d55d22e89f2da996df30cb2103454ce944202b4041ace"} Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.160624 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mxznm" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.296217 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050eba1f-23da-4294-8cdf-4fad443211a2-operator-scripts\") pod \"050eba1f-23da-4294-8cdf-4fad443211a2\" (UID: \"050eba1f-23da-4294-8cdf-4fad443211a2\") " Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.296343 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqssz\" (UniqueName: \"kubernetes.io/projected/050eba1f-23da-4294-8cdf-4fad443211a2-kube-api-access-fqssz\") pod \"050eba1f-23da-4294-8cdf-4fad443211a2\" (UID: \"050eba1f-23da-4294-8cdf-4fad443211a2\") " Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.298500 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/050eba1f-23da-4294-8cdf-4fad443211a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "050eba1f-23da-4294-8cdf-4fad443211a2" (UID: "050eba1f-23da-4294-8cdf-4fad443211a2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.308624 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050eba1f-23da-4294-8cdf-4fad443211a2-kube-api-access-fqssz" (OuterVolumeSpecName: "kube-api-access-fqssz") pod "050eba1f-23da-4294-8cdf-4fad443211a2" (UID: "050eba1f-23da-4294-8cdf-4fad443211a2"). InnerVolumeSpecName "kube-api-access-fqssz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.398227 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/050eba1f-23da-4294-8cdf-4fad443211a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.398270 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqssz\" (UniqueName: \"kubernetes.io/projected/050eba1f-23da-4294-8cdf-4fad443211a2-kube-api-access-fqssz\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.399873 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-pmshz" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.407650 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-gsxgh" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.414711 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d6ef-account-create-update-l2xhf" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.499000 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxfjd\" (UniqueName: \"kubernetes.io/projected/4287d345-c068-46eb-a185-ee415ed11ade-kube-api-access-vxfjd\") pod \"4287d345-c068-46eb-a185-ee415ed11ade\" (UID: \"4287d345-c068-46eb-a185-ee415ed11ade\") " Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.499066 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4287d345-c068-46eb-a185-ee415ed11ade-operator-scripts\") pod \"4287d345-c068-46eb-a185-ee415ed11ade\" (UID: \"4287d345-c068-46eb-a185-ee415ed11ade\") " Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.499128 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7178952f-0bf0-472c-9a3f-0c0794b32590-operator-scripts\") pod \"7178952f-0bf0-472c-9a3f-0c0794b32590\" (UID: \"7178952f-0bf0-472c-9a3f-0c0794b32590\") " Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.499275 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec79017-52f6-47b7-b09f-c6ad2f738d97-operator-scripts\") pod \"bec79017-52f6-47b7-b09f-c6ad2f738d97\" (UID: \"bec79017-52f6-47b7-b09f-c6ad2f738d97\") " Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.499402 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l8cn\" (UniqueName: \"kubernetes.io/projected/bec79017-52f6-47b7-b09f-c6ad2f738d97-kube-api-access-7l8cn\") pod \"bec79017-52f6-47b7-b09f-c6ad2f738d97\" (UID: \"bec79017-52f6-47b7-b09f-c6ad2f738d97\") " Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.499444 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-9kqwj\" (UniqueName: \"kubernetes.io/projected/7178952f-0bf0-472c-9a3f-0c0794b32590-kube-api-access-9kqwj\") pod \"7178952f-0bf0-472c-9a3f-0c0794b32590\" (UID: \"7178952f-0bf0-472c-9a3f-0c0794b32590\") " Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.499647 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4287d345-c068-46eb-a185-ee415ed11ade-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4287d345-c068-46eb-a185-ee415ed11ade" (UID: "4287d345-c068-46eb-a185-ee415ed11ade"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.499677 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7178952f-0bf0-472c-9a3f-0c0794b32590-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7178952f-0bf0-472c-9a3f-0c0794b32590" (UID: "7178952f-0bf0-472c-9a3f-0c0794b32590"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.499772 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec79017-52f6-47b7-b09f-c6ad2f738d97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bec79017-52f6-47b7-b09f-c6ad2f738d97" (UID: "bec79017-52f6-47b7-b09f-c6ad2f738d97"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.500295 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4287d345-c068-46eb-a185-ee415ed11ade-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.500325 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7178952f-0bf0-472c-9a3f-0c0794b32590-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.500341 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec79017-52f6-47b7-b09f-c6ad2f738d97-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.503349 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4287d345-c068-46eb-a185-ee415ed11ade-kube-api-access-vxfjd" (OuterVolumeSpecName: "kube-api-access-vxfjd") pod "4287d345-c068-46eb-a185-ee415ed11ade" (UID: "4287d345-c068-46eb-a185-ee415ed11ade"). InnerVolumeSpecName "kube-api-access-vxfjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.503399 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec79017-52f6-47b7-b09f-c6ad2f738d97-kube-api-access-7l8cn" (OuterVolumeSpecName: "kube-api-access-7l8cn") pod "bec79017-52f6-47b7-b09f-c6ad2f738d97" (UID: "bec79017-52f6-47b7-b09f-c6ad2f738d97"). InnerVolumeSpecName "kube-api-access-7l8cn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.505294 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7178952f-0bf0-472c-9a3f-0c0794b32590-kube-api-access-9kqwj" (OuterVolumeSpecName: "kube-api-access-9kqwj") pod "7178952f-0bf0-472c-9a3f-0c0794b32590" (UID: "7178952f-0bf0-472c-9a3f-0c0794b32590"). InnerVolumeSpecName "kube-api-access-9kqwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.602472 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l8cn\" (UniqueName: \"kubernetes.io/projected/bec79017-52f6-47b7-b09f-c6ad2f738d97-kube-api-access-7l8cn\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.602841 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kqwj\" (UniqueName: \"kubernetes.io/projected/7178952f-0bf0-472c-9a3f-0c0794b32590-kube-api-access-9kqwj\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.602932 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxfjd\" (UniqueName: \"kubernetes.io/projected/4287d345-c068-46eb-a185-ee415ed11ade-kube-api-access-vxfjd\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.627970 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d6ef-account-create-update-l2xhf" event={"ID":"7178952f-0bf0-472c-9a3f-0c0794b32590","Type":"ContainerDied","Data":"5bd99769dd778a84304ae48726b1228ea2de09a74078667ffa728749eba1163f"} Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.628017 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bd99769dd778a84304ae48726b1228ea2de09a74078667ffa728749eba1163f" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.629507 4754 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d6ef-account-create-update-l2xhf" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.629966 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-gsxgh" event={"ID":"4287d345-c068-46eb-a185-ee415ed11ade","Type":"ContainerDied","Data":"c614605bdeea3a7946329f5ccf119f52213beb16c54f9383699ebfd4ca8bad3b"} Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.629990 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c614605bdeea3a7946329f5ccf119f52213beb16c54f9383699ebfd4ca8bad3b" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.630032 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-gsxgh" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.641662 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pmshz" event={"ID":"bec79017-52f6-47b7-b09f-c6ad2f738d97","Type":"ContainerDied","Data":"793e633ba2db34af5812f7f620e95e26466dd87e48dbc1dd8dd86fb0d87228b1"} Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.641704 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="793e633ba2db34af5812f7f620e95e26466dd87e48dbc1dd8dd86fb0d87228b1" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.642179 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-pmshz" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.656059 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-mxznm" event={"ID":"050eba1f-23da-4294-8cdf-4fad443211a2","Type":"ContainerDied","Data":"b4ceee819f3ccda7ac8a2649c30ac23551014b4bb0a677fdaa91a6990cbc52a6"} Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.656304 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4ceee819f3ccda7ac8a2649c30ac23551014b4bb0a677fdaa91a6990cbc52a6" Feb 18 19:40:14 crc kubenswrapper[4754]: I0218 19:40:14.656330 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-mxznm" Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.025348 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5aeb-account-create-update-w2g2d" Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.117258 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/154b46c9-21a8-42bc-897c-51c2c9691dd1-operator-scripts\") pod \"154b46c9-21a8-42bc-897c-51c2c9691dd1\" (UID: \"154b46c9-21a8-42bc-897c-51c2c9691dd1\") " Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.117320 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snlxj\" (UniqueName: \"kubernetes.io/projected/154b46c9-21a8-42bc-897c-51c2c9691dd1-kube-api-access-snlxj\") pod \"154b46c9-21a8-42bc-897c-51c2c9691dd1\" (UID: \"154b46c9-21a8-42bc-897c-51c2c9691dd1\") " Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.118734 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/154b46c9-21a8-42bc-897c-51c2c9691dd1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"154b46c9-21a8-42bc-897c-51c2c9691dd1" (UID: "154b46c9-21a8-42bc-897c-51c2c9691dd1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.124482 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/154b46c9-21a8-42bc-897c-51c2c9691dd1-kube-api-access-snlxj" (OuterVolumeSpecName: "kube-api-access-snlxj") pod "154b46c9-21a8-42bc-897c-51c2c9691dd1" (UID: "154b46c9-21a8-42bc-897c-51c2c9691dd1"). InnerVolumeSpecName "kube-api-access-snlxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.215547 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-dec9-account-create-update-svm4r" Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.220278 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/154b46c9-21a8-42bc-897c-51c2c9691dd1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.220320 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snlxj\" (UniqueName: \"kubernetes.io/projected/154b46c9-21a8-42bc-897c-51c2c9691dd1-kube-api-access-snlxj\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.321240 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwbjb\" (UniqueName: \"kubernetes.io/projected/1ba06bd2-7464-4e3b-bb9f-cbafc0f44608-kube-api-access-dwbjb\") pod \"1ba06bd2-7464-4e3b-bb9f-cbafc0f44608\" (UID: \"1ba06bd2-7464-4e3b-bb9f-cbafc0f44608\") " Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.321359 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1ba06bd2-7464-4e3b-bb9f-cbafc0f44608-operator-scripts\") pod \"1ba06bd2-7464-4e3b-bb9f-cbafc0f44608\" (UID: \"1ba06bd2-7464-4e3b-bb9f-cbafc0f44608\") " Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.323606 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ba06bd2-7464-4e3b-bb9f-cbafc0f44608-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ba06bd2-7464-4e3b-bb9f-cbafc0f44608" (UID: "1ba06bd2-7464-4e3b-bb9f-cbafc0f44608"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.329418 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ba06bd2-7464-4e3b-bb9f-cbafc0f44608-kube-api-access-dwbjb" (OuterVolumeSpecName: "kube-api-access-dwbjb") pod "1ba06bd2-7464-4e3b-bb9f-cbafc0f44608" (UID: "1ba06bd2-7464-4e3b-bb9f-cbafc0f44608"). InnerVolumeSpecName "kube-api-access-dwbjb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.427100 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwbjb\" (UniqueName: \"kubernetes.io/projected/1ba06bd2-7464-4e3b-bb9f-cbafc0f44608-kube-api-access-dwbjb\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.427185 4754 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ba06bd2-7464-4e3b-bb9f-cbafc0f44608-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.672078 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c101acb-e6ac-4e92-8a34-8641c837b9c7","Type":"ContainerStarted","Data":"a454bf3453cf2d1a1963e1099f0e5fd3c21490d5a80d71c9c4f6c3f6401b9eae"} Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.673215 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.676699 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5aeb-account-create-update-w2g2d" Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.676708 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5aeb-account-create-update-w2g2d" event={"ID":"154b46c9-21a8-42bc-897c-51c2c9691dd1","Type":"ContainerDied","Data":"1681854e7acf374f2badb200d60c83ac04d6e6e9eba55a86369bbe775600dfb8"} Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.676764 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1681854e7acf374f2badb200d60c83ac04d6e6e9eba55a86369bbe775600dfb8" Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.678947 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dec9-account-create-update-svm4r" event={"ID":"1ba06bd2-7464-4e3b-bb9f-cbafc0f44608","Type":"ContainerDied","Data":"e8ffdf6d2b8bb299f3d5d40e88a4dd5f59804c856deea8c61a4288d9dfb73cc1"} Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.678983 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8ffdf6d2b8bb299f3d5d40e88a4dd5f59804c856deea8c61a4288d9dfb73cc1" Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.679040 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-dec9-account-create-update-svm4r" Feb 18 19:40:15 crc kubenswrapper[4754]: I0218 19:40:15.699071 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.275447814 podStartE2EDuration="6.699047659s" podCreationTimestamp="2026-02-18 19:40:09 +0000 UTC" firstStartedPulling="2026-02-18 19:40:10.990941364 +0000 UTC m=+1313.441354160" lastFinishedPulling="2026-02-18 19:40:15.414541209 +0000 UTC m=+1317.864954005" observedRunningTime="2026-02-18 19:40:15.692729243 +0000 UTC m=+1318.143142059" watchObservedRunningTime="2026-02-18 19:40:15.699047659 +0000 UTC m=+1318.149460455" Feb 18 19:40:20 crc kubenswrapper[4754]: I0218 19:40:20.953578 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mjgmn"] Feb 18 19:40:20 crc kubenswrapper[4754]: E0218 19:40:20.957196 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4287d345-c068-46eb-a185-ee415ed11ade" containerName="mariadb-database-create" Feb 18 19:40:20 crc kubenswrapper[4754]: I0218 19:40:20.957230 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="4287d345-c068-46eb-a185-ee415ed11ade" containerName="mariadb-database-create" Feb 18 19:40:20 crc kubenswrapper[4754]: E0218 19:40:20.957247 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="154b46c9-21a8-42bc-897c-51c2c9691dd1" containerName="mariadb-account-create-update" Feb 18 19:40:20 crc kubenswrapper[4754]: I0218 19:40:20.957253 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="154b46c9-21a8-42bc-897c-51c2c9691dd1" containerName="mariadb-account-create-update" Feb 18 19:40:20 crc kubenswrapper[4754]: E0218 19:40:20.957264 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec79017-52f6-47b7-b09f-c6ad2f738d97" containerName="mariadb-database-create" Feb 18 19:40:20 crc kubenswrapper[4754]: I0218 19:40:20.957277 4754 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="bec79017-52f6-47b7-b09f-c6ad2f738d97" containerName="mariadb-database-create" Feb 18 19:40:20 crc kubenswrapper[4754]: E0218 19:40:20.957291 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba06bd2-7464-4e3b-bb9f-cbafc0f44608" containerName="mariadb-account-create-update" Feb 18 19:40:20 crc kubenswrapper[4754]: I0218 19:40:20.957300 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba06bd2-7464-4e3b-bb9f-cbafc0f44608" containerName="mariadb-account-create-update" Feb 18 19:40:20 crc kubenswrapper[4754]: E0218 19:40:20.957318 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050eba1f-23da-4294-8cdf-4fad443211a2" containerName="mariadb-database-create" Feb 18 19:40:20 crc kubenswrapper[4754]: I0218 19:40:20.957323 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="050eba1f-23da-4294-8cdf-4fad443211a2" containerName="mariadb-database-create" Feb 18 19:40:20 crc kubenswrapper[4754]: E0218 19:40:20.957336 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7178952f-0bf0-472c-9a3f-0c0794b32590" containerName="mariadb-account-create-update" Feb 18 19:40:20 crc kubenswrapper[4754]: I0218 19:40:20.957342 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="7178952f-0bf0-472c-9a3f-0c0794b32590" containerName="mariadb-account-create-update" Feb 18 19:40:20 crc kubenswrapper[4754]: I0218 19:40:20.957540 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="154b46c9-21a8-42bc-897c-51c2c9691dd1" containerName="mariadb-account-create-update" Feb 18 19:40:20 crc kubenswrapper[4754]: I0218 19:40:20.957551 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec79017-52f6-47b7-b09f-c6ad2f738d97" containerName="mariadb-database-create" Feb 18 19:40:20 crc kubenswrapper[4754]: I0218 19:40:20.957562 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ba06bd2-7464-4e3b-bb9f-cbafc0f44608" 
containerName="mariadb-account-create-update" Feb 18 19:40:20 crc kubenswrapper[4754]: I0218 19:40:20.957576 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="4287d345-c068-46eb-a185-ee415ed11ade" containerName="mariadb-database-create" Feb 18 19:40:20 crc kubenswrapper[4754]: I0218 19:40:20.957594 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="050eba1f-23da-4294-8cdf-4fad443211a2" containerName="mariadb-database-create" Feb 18 19:40:20 crc kubenswrapper[4754]: I0218 19:40:20.957602 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="7178952f-0bf0-472c-9a3f-0c0794b32590" containerName="mariadb-account-create-update" Feb 18 19:40:20 crc kubenswrapper[4754]: I0218 19:40:20.958285 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mjgmn" Feb 18 19:40:20 crc kubenswrapper[4754]: I0218 19:40:20.967275 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 18 19:40:20 crc kubenswrapper[4754]: I0218 19:40:20.967532 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 19:40:20 crc kubenswrapper[4754]: I0218 19:40:20.974112 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-k6gj6" Feb 18 19:40:20 crc kubenswrapper[4754]: I0218 19:40:20.977534 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mjgmn"] Feb 18 19:40:21 crc kubenswrapper[4754]: I0218 19:40:21.037224 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e05b6b-58e2-448b-b109-a8b149061c37-config-data\") pod \"nova-cell0-conductor-db-sync-mjgmn\" (UID: \"41e05b6b-58e2-448b-b109-a8b149061c37\") " pod="openstack/nova-cell0-conductor-db-sync-mjgmn" Feb 18 19:40:21 crc 
kubenswrapper[4754]: I0218 19:40:21.037469 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41e05b6b-58e2-448b-b109-a8b149061c37-scripts\") pod \"nova-cell0-conductor-db-sync-mjgmn\" (UID: \"41e05b6b-58e2-448b-b109-a8b149061c37\") " pod="openstack/nova-cell0-conductor-db-sync-mjgmn" Feb 18 19:40:21 crc kubenswrapper[4754]: I0218 19:40:21.037514 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e05b6b-58e2-448b-b109-a8b149061c37-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mjgmn\" (UID: \"41e05b6b-58e2-448b-b109-a8b149061c37\") " pod="openstack/nova-cell0-conductor-db-sync-mjgmn" Feb 18 19:40:21 crc kubenswrapper[4754]: I0218 19:40:21.037593 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w58tc\" (UniqueName: \"kubernetes.io/projected/41e05b6b-58e2-448b-b109-a8b149061c37-kube-api-access-w58tc\") pod \"nova-cell0-conductor-db-sync-mjgmn\" (UID: \"41e05b6b-58e2-448b-b109-a8b149061c37\") " pod="openstack/nova-cell0-conductor-db-sync-mjgmn" Feb 18 19:40:21 crc kubenswrapper[4754]: I0218 19:40:21.139278 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w58tc\" (UniqueName: \"kubernetes.io/projected/41e05b6b-58e2-448b-b109-a8b149061c37-kube-api-access-w58tc\") pod \"nova-cell0-conductor-db-sync-mjgmn\" (UID: \"41e05b6b-58e2-448b-b109-a8b149061c37\") " pod="openstack/nova-cell0-conductor-db-sync-mjgmn" Feb 18 19:40:21 crc kubenswrapper[4754]: I0218 19:40:21.139481 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e05b6b-58e2-448b-b109-a8b149061c37-config-data\") pod \"nova-cell0-conductor-db-sync-mjgmn\" (UID: \"41e05b6b-58e2-448b-b109-a8b149061c37\") " 
pod="openstack/nova-cell0-conductor-db-sync-mjgmn" Feb 18 19:40:21 crc kubenswrapper[4754]: I0218 19:40:21.139525 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41e05b6b-58e2-448b-b109-a8b149061c37-scripts\") pod \"nova-cell0-conductor-db-sync-mjgmn\" (UID: \"41e05b6b-58e2-448b-b109-a8b149061c37\") " pod="openstack/nova-cell0-conductor-db-sync-mjgmn" Feb 18 19:40:21 crc kubenswrapper[4754]: I0218 19:40:21.139611 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e05b6b-58e2-448b-b109-a8b149061c37-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mjgmn\" (UID: \"41e05b6b-58e2-448b-b109-a8b149061c37\") " pod="openstack/nova-cell0-conductor-db-sync-mjgmn" Feb 18 19:40:21 crc kubenswrapper[4754]: I0218 19:40:21.145831 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41e05b6b-58e2-448b-b109-a8b149061c37-scripts\") pod \"nova-cell0-conductor-db-sync-mjgmn\" (UID: \"41e05b6b-58e2-448b-b109-a8b149061c37\") " pod="openstack/nova-cell0-conductor-db-sync-mjgmn" Feb 18 19:40:21 crc kubenswrapper[4754]: I0218 19:40:21.148249 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e05b6b-58e2-448b-b109-a8b149061c37-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mjgmn\" (UID: \"41e05b6b-58e2-448b-b109-a8b149061c37\") " pod="openstack/nova-cell0-conductor-db-sync-mjgmn" Feb 18 19:40:21 crc kubenswrapper[4754]: I0218 19:40:21.150816 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e05b6b-58e2-448b-b109-a8b149061c37-config-data\") pod \"nova-cell0-conductor-db-sync-mjgmn\" (UID: \"41e05b6b-58e2-448b-b109-a8b149061c37\") " pod="openstack/nova-cell0-conductor-db-sync-mjgmn" 
Feb 18 19:40:21 crc kubenswrapper[4754]: I0218 19:40:21.158060 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w58tc\" (UniqueName: \"kubernetes.io/projected/41e05b6b-58e2-448b-b109-a8b149061c37-kube-api-access-w58tc\") pod \"nova-cell0-conductor-db-sync-mjgmn\" (UID: \"41e05b6b-58e2-448b-b109-a8b149061c37\") " pod="openstack/nova-cell0-conductor-db-sync-mjgmn" Feb 18 19:40:21 crc kubenswrapper[4754]: I0218 19:40:21.277286 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mjgmn" Feb 18 19:40:21 crc kubenswrapper[4754]: W0218 19:40:21.818373 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41e05b6b_58e2_448b_b109_a8b149061c37.slice/crio-a48a5dc3d1b07dea20131d4507545ca65828c58999853c31498ab41c276233c4 WatchSource:0}: Error finding container a48a5dc3d1b07dea20131d4507545ca65828c58999853c31498ab41c276233c4: Status 404 returned error can't find the container with id a48a5dc3d1b07dea20131d4507545ca65828c58999853c31498ab41c276233c4 Feb 18 19:40:21 crc kubenswrapper[4754]: I0218 19:40:21.821210 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mjgmn"] Feb 18 19:40:22 crc kubenswrapper[4754]: I0218 19:40:22.748989 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mjgmn" event={"ID":"41e05b6b-58e2-448b-b109-a8b149061c37","Type":"ContainerStarted","Data":"a48a5dc3d1b07dea20131d4507545ca65828c58999853c31498ab41c276233c4"} Feb 18 19:40:23 crc kubenswrapper[4754]: I0218 19:40:23.016115 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-9577ccdb8-nfcx9" podUID="8afcabe6-a035-4ecd-8522-93afd1691f25" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.161:8443: connect: 
connection refused" Feb 18 19:40:29 crc kubenswrapper[4754]: I0218 19:40:29.826438 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mjgmn" event={"ID":"41e05b6b-58e2-448b-b109-a8b149061c37","Type":"ContainerStarted","Data":"0be47ccea0a8348510d65e01713426445bfb6ace646e55211e7f98fb8e858138"} Feb 18 19:40:29 crc kubenswrapper[4754]: I0218 19:40:29.853392 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-mjgmn" podStartSLOduration=2.50585111 podStartE2EDuration="9.853359977s" podCreationTimestamp="2026-02-18 19:40:20 +0000 UTC" firstStartedPulling="2026-02-18 19:40:21.823823941 +0000 UTC m=+1324.274236737" lastFinishedPulling="2026-02-18 19:40:29.171332808 +0000 UTC m=+1331.621745604" observedRunningTime="2026-02-18 19:40:29.842826336 +0000 UTC m=+1332.293239142" watchObservedRunningTime="2026-02-18 19:40:29.853359977 +0000 UTC m=+1332.303772773" Feb 18 19:40:33 crc kubenswrapper[4754]: I0218 19:40:33.015531 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-9577ccdb8-nfcx9" podUID="8afcabe6-a035-4ecd-8522-93afd1691f25" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.161:8443: connect: connection refused" Feb 18 19:40:33 crc kubenswrapper[4754]: I0218 19:40:33.015991 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:40:37 crc kubenswrapper[4754]: I0218 19:40:37.952203 4754 generic.go:334] "Generic (PLEG): container finished" podID="8afcabe6-a035-4ecd-8522-93afd1691f25" containerID="6a052b7efef88a77b09203dde939054683d4837942b9da7c71526e02fa3db66f" exitCode=137 Feb 18 19:40:37 crc kubenswrapper[4754]: I0218 19:40:37.952348 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9577ccdb8-nfcx9" 
event={"ID":"8afcabe6-a035-4ecd-8522-93afd1691f25","Type":"ContainerDied","Data":"6a052b7efef88a77b09203dde939054683d4837942b9da7c71526e02fa3db66f"} Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.050928 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.222997 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgmv9\" (UniqueName: \"kubernetes.io/projected/8afcabe6-a035-4ecd-8522-93afd1691f25-kube-api-access-xgmv9\") pod \"8afcabe6-a035-4ecd-8522-93afd1691f25\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.223309 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8afcabe6-a035-4ecd-8522-93afd1691f25-scripts\") pod \"8afcabe6-a035-4ecd-8522-93afd1691f25\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.223414 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8afcabe6-a035-4ecd-8522-93afd1691f25-horizon-secret-key\") pod \"8afcabe6-a035-4ecd-8522-93afd1691f25\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.223453 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afcabe6-a035-4ecd-8522-93afd1691f25-combined-ca-bundle\") pod \"8afcabe6-a035-4ecd-8522-93afd1691f25\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.223469 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8afcabe6-a035-4ecd-8522-93afd1691f25-horizon-tls-certs\") pod \"8afcabe6-a035-4ecd-8522-93afd1691f25\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.223549 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8afcabe6-a035-4ecd-8522-93afd1691f25-logs\") pod \"8afcabe6-a035-4ecd-8522-93afd1691f25\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.223726 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8afcabe6-a035-4ecd-8522-93afd1691f25-config-data\") pod \"8afcabe6-a035-4ecd-8522-93afd1691f25\" (UID: \"8afcabe6-a035-4ecd-8522-93afd1691f25\") " Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.224029 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8afcabe6-a035-4ecd-8522-93afd1691f25-logs" (OuterVolumeSpecName: "logs") pod "8afcabe6-a035-4ecd-8522-93afd1691f25" (UID: "8afcabe6-a035-4ecd-8522-93afd1691f25"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.224204 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8afcabe6-a035-4ecd-8522-93afd1691f25-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.233329 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8afcabe6-a035-4ecd-8522-93afd1691f25-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8afcabe6-a035-4ecd-8522-93afd1691f25" (UID: "8afcabe6-a035-4ecd-8522-93afd1691f25"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.233461 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8afcabe6-a035-4ecd-8522-93afd1691f25-kube-api-access-xgmv9" (OuterVolumeSpecName: "kube-api-access-xgmv9") pod "8afcabe6-a035-4ecd-8522-93afd1691f25" (UID: "8afcabe6-a035-4ecd-8522-93afd1691f25"). InnerVolumeSpecName "kube-api-access-xgmv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.253036 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8afcabe6-a035-4ecd-8522-93afd1691f25-config-data" (OuterVolumeSpecName: "config-data") pod "8afcabe6-a035-4ecd-8522-93afd1691f25" (UID: "8afcabe6-a035-4ecd-8522-93afd1691f25"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.253380 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8afcabe6-a035-4ecd-8522-93afd1691f25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8afcabe6-a035-4ecd-8522-93afd1691f25" (UID: "8afcabe6-a035-4ecd-8522-93afd1691f25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.254109 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8afcabe6-a035-4ecd-8522-93afd1691f25-scripts" (OuterVolumeSpecName: "scripts") pod "8afcabe6-a035-4ecd-8522-93afd1691f25" (UID: "8afcabe6-a035-4ecd-8522-93afd1691f25"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.278504 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8afcabe6-a035-4ecd-8522-93afd1691f25-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "8afcabe6-a035-4ecd-8522-93afd1691f25" (UID: "8afcabe6-a035-4ecd-8522-93afd1691f25"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.325913 4754 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8afcabe6-a035-4ecd-8522-93afd1691f25-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.325944 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afcabe6-a035-4ecd-8522-93afd1691f25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.325953 4754 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8afcabe6-a035-4ecd-8522-93afd1691f25-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.325962 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8afcabe6-a035-4ecd-8522-93afd1691f25-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.325974 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgmv9\" (UniqueName: \"kubernetes.io/projected/8afcabe6-a035-4ecd-8522-93afd1691f25-kube-api-access-xgmv9\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.325985 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/8afcabe6-a035-4ecd-8522-93afd1691f25-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.962337 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9577ccdb8-nfcx9" event={"ID":"8afcabe6-a035-4ecd-8522-93afd1691f25","Type":"ContainerDied","Data":"23a146e392a33561291555e5b62b09042e776d4f7b8a02525e4e5b6ed6d6dd1c"} Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.962750 4754 scope.go:117] "RemoveContainer" containerID="88b802d23292dd5c618c19202d02440a41dec604bfd64271e8807d8dc39458ab" Feb 18 19:40:38 crc kubenswrapper[4754]: I0218 19:40:38.962428 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9577ccdb8-nfcx9" Feb 18 19:40:39 crc kubenswrapper[4754]: I0218 19:40:39.022508 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9577ccdb8-nfcx9"] Feb 18 19:40:39 crc kubenswrapper[4754]: I0218 19:40:39.039525 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-9577ccdb8-nfcx9"] Feb 18 19:40:39 crc kubenswrapper[4754]: I0218 19:40:39.144485 4754 scope.go:117] "RemoveContainer" containerID="6a052b7efef88a77b09203dde939054683d4837942b9da7c71526e02fa3db66f" Feb 18 19:40:40 crc kubenswrapper[4754]: I0218 19:40:40.171120 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 19:40:40 crc kubenswrapper[4754]: I0218 19:40:40.227051 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8afcabe6-a035-4ecd-8522-93afd1691f25" path="/var/lib/kubelet/pods/8afcabe6-a035-4ecd-8522-93afd1691f25/volumes" Feb 18 19:40:41 crc kubenswrapper[4754]: I0218 19:40:41.067416 4754 generic.go:334] "Generic (PLEG): container finished" podID="41e05b6b-58e2-448b-b109-a8b149061c37" containerID="0be47ccea0a8348510d65e01713426445bfb6ace646e55211e7f98fb8e858138" exitCode=0 Feb 18 19:40:41 crc 
kubenswrapper[4754]: I0218 19:40:41.067674 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mjgmn" event={"ID":"41e05b6b-58e2-448b-b109-a8b149061c37","Type":"ContainerDied","Data":"0be47ccea0a8348510d65e01713426445bfb6ace646e55211e7f98fb8e858138"} Feb 18 19:40:42 crc kubenswrapper[4754]: I0218 19:40:42.487003 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mjgmn" Feb 18 19:40:42 crc kubenswrapper[4754]: I0218 19:40:42.613893 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e05b6b-58e2-448b-b109-a8b149061c37-combined-ca-bundle\") pod \"41e05b6b-58e2-448b-b109-a8b149061c37\" (UID: \"41e05b6b-58e2-448b-b109-a8b149061c37\") " Feb 18 19:40:42 crc kubenswrapper[4754]: I0218 19:40:42.613973 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w58tc\" (UniqueName: \"kubernetes.io/projected/41e05b6b-58e2-448b-b109-a8b149061c37-kube-api-access-w58tc\") pod \"41e05b6b-58e2-448b-b109-a8b149061c37\" (UID: \"41e05b6b-58e2-448b-b109-a8b149061c37\") " Feb 18 19:40:42 crc kubenswrapper[4754]: I0218 19:40:42.614255 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e05b6b-58e2-448b-b109-a8b149061c37-config-data\") pod \"41e05b6b-58e2-448b-b109-a8b149061c37\" (UID: \"41e05b6b-58e2-448b-b109-a8b149061c37\") " Feb 18 19:40:42 crc kubenswrapper[4754]: I0218 19:40:42.614314 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41e05b6b-58e2-448b-b109-a8b149061c37-scripts\") pod \"41e05b6b-58e2-448b-b109-a8b149061c37\" (UID: \"41e05b6b-58e2-448b-b109-a8b149061c37\") " Feb 18 19:40:42 crc kubenswrapper[4754]: I0218 19:40:42.623537 4754 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41e05b6b-58e2-448b-b109-a8b149061c37-kube-api-access-w58tc" (OuterVolumeSpecName: "kube-api-access-w58tc") pod "41e05b6b-58e2-448b-b109-a8b149061c37" (UID: "41e05b6b-58e2-448b-b109-a8b149061c37"). InnerVolumeSpecName "kube-api-access-w58tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:40:42 crc kubenswrapper[4754]: I0218 19:40:42.623554 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41e05b6b-58e2-448b-b109-a8b149061c37-scripts" (OuterVolumeSpecName: "scripts") pod "41e05b6b-58e2-448b-b109-a8b149061c37" (UID: "41e05b6b-58e2-448b-b109-a8b149061c37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:40:42 crc kubenswrapper[4754]: I0218 19:40:42.648349 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41e05b6b-58e2-448b-b109-a8b149061c37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41e05b6b-58e2-448b-b109-a8b149061c37" (UID: "41e05b6b-58e2-448b-b109-a8b149061c37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:40:42 crc kubenswrapper[4754]: I0218 19:40:42.648654 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41e05b6b-58e2-448b-b109-a8b149061c37-config-data" (OuterVolumeSpecName: "config-data") pod "41e05b6b-58e2-448b-b109-a8b149061c37" (UID: "41e05b6b-58e2-448b-b109-a8b149061c37"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:40:42 crc kubenswrapper[4754]: I0218 19:40:42.716834 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e05b6b-58e2-448b-b109-a8b149061c37-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:42 crc kubenswrapper[4754]: I0218 19:40:42.716869 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41e05b6b-58e2-448b-b109-a8b149061c37-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:42 crc kubenswrapper[4754]: I0218 19:40:42.716878 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e05b6b-58e2-448b-b109-a8b149061c37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:42 crc kubenswrapper[4754]: I0218 19:40:42.716892 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w58tc\" (UniqueName: \"kubernetes.io/projected/41e05b6b-58e2-448b-b109-a8b149061c37-kube-api-access-w58tc\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.091495 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mjgmn" event={"ID":"41e05b6b-58e2-448b-b109-a8b149061c37","Type":"ContainerDied","Data":"a48a5dc3d1b07dea20131d4507545ca65828c58999853c31498ab41c276233c4"} Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.091547 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a48a5dc3d1b07dea20131d4507545ca65828c58999853c31498ab41c276233c4" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.091915 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mjgmn" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.198086 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 19:40:43 crc kubenswrapper[4754]: E0218 19:40:43.198616 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8afcabe6-a035-4ecd-8522-93afd1691f25" containerName="horizon-log" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.198643 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8afcabe6-a035-4ecd-8522-93afd1691f25" containerName="horizon-log" Feb 18 19:40:43 crc kubenswrapper[4754]: E0218 19:40:43.198678 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8afcabe6-a035-4ecd-8522-93afd1691f25" containerName="horizon" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.198687 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8afcabe6-a035-4ecd-8522-93afd1691f25" containerName="horizon" Feb 18 19:40:43 crc kubenswrapper[4754]: E0218 19:40:43.198701 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41e05b6b-58e2-448b-b109-a8b149061c37" containerName="nova-cell0-conductor-db-sync" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.198709 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="41e05b6b-58e2-448b-b109-a8b149061c37" containerName="nova-cell0-conductor-db-sync" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.198942 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="41e05b6b-58e2-448b-b109-a8b149061c37" containerName="nova-cell0-conductor-db-sync" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.198964 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="8afcabe6-a035-4ecd-8522-93afd1691f25" containerName="horizon-log" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.198980 4754 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8afcabe6-a035-4ecd-8522-93afd1691f25" containerName="horizon" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.199777 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.204815 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.205622 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-k6gj6" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.211537 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.327910 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.327950 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcpmm\" (UniqueName: \"kubernetes.io/projected/afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf-kube-api-access-gcpmm\") pod \"nova-cell0-conductor-0\" (UID: \"afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.327997 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 
19:40:43.429799 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcpmm\" (UniqueName: \"kubernetes.io/projected/afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf-kube-api-access-gcpmm\") pod \"nova-cell0-conductor-0\" (UID: \"afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.429848 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.429918 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.433422 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.433620 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.445537 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcpmm\" 
(UniqueName: \"kubernetes.io/projected/afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf-kube-api-access-gcpmm\") pod \"nova-cell0-conductor-0\" (UID: \"afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:40:43 crc kubenswrapper[4754]: I0218 19:40:43.520217 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 19:40:44 crc kubenswrapper[4754]: I0218 19:40:44.070182 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 19:40:44 crc kubenswrapper[4754]: I0218 19:40:44.107163 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf","Type":"ContainerStarted","Data":"0f4a37aebca271f885c8635211cf97ea900f274387671a1a5ddfd958156ca074"} Feb 18 19:40:44 crc kubenswrapper[4754]: I0218 19:40:44.464812 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:40:44 crc kubenswrapper[4754]: I0218 19:40:44.465486 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="f70d6e04-a01e-4213-83b3-b986177730f1" containerName="kube-state-metrics" containerID="cri-o://22357159ca01aa7807978e2e49d9d640795bdd23952b4fcab4be090c073210c6" gracePeriod=30 Feb 18 19:40:44 crc kubenswrapper[4754]: I0218 19:40:44.964524 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.062293 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnxs6\" (UniqueName: \"kubernetes.io/projected/f70d6e04-a01e-4213-83b3-b986177730f1-kube-api-access-qnxs6\") pod \"f70d6e04-a01e-4213-83b3-b986177730f1\" (UID: \"f70d6e04-a01e-4213-83b3-b986177730f1\") " Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.069137 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f70d6e04-a01e-4213-83b3-b986177730f1-kube-api-access-qnxs6" (OuterVolumeSpecName: "kube-api-access-qnxs6") pod "f70d6e04-a01e-4213-83b3-b986177730f1" (UID: "f70d6e04-a01e-4213-83b3-b986177730f1"). InnerVolumeSpecName "kube-api-access-qnxs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.135969 4754 generic.go:334] "Generic (PLEG): container finished" podID="f70d6e04-a01e-4213-83b3-b986177730f1" containerID="22357159ca01aa7807978e2e49d9d640795bdd23952b4fcab4be090c073210c6" exitCode=2 Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.136066 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f70d6e04-a01e-4213-83b3-b986177730f1","Type":"ContainerDied","Data":"22357159ca01aa7807978e2e49d9d640795bdd23952b4fcab4be090c073210c6"} Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.136110 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f70d6e04-a01e-4213-83b3-b986177730f1","Type":"ContainerDied","Data":"5d2ada013616ea41ac629810bf801cf1c690e1cb2d02a80e92eba673abad8cdc"} Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.136132 4754 scope.go:117] "RemoveContainer" containerID="22357159ca01aa7807978e2e49d9d640795bdd23952b4fcab4be090c073210c6" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.138911 
4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.150720 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"afd7a3f0-7a9a-4929-aa09-ec2b2a887dcf","Type":"ContainerStarted","Data":"e87aee6d47b87d1f19e42a8ffc40c93e8838e3611be49679fa075c2388766b5c"} Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.151049 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.170835 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnxs6\" (UniqueName: \"kubernetes.io/projected/f70d6e04-a01e-4213-83b3-b986177730f1-kube-api-access-qnxs6\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.176235 4754 scope.go:117] "RemoveContainer" containerID="22357159ca01aa7807978e2e49d9d640795bdd23952b4fcab4be090c073210c6" Feb 18 19:40:45 crc kubenswrapper[4754]: E0218 19:40:45.177531 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22357159ca01aa7807978e2e49d9d640795bdd23952b4fcab4be090c073210c6\": container with ID starting with 22357159ca01aa7807978e2e49d9d640795bdd23952b4fcab4be090c073210c6 not found: ID does not exist" containerID="22357159ca01aa7807978e2e49d9d640795bdd23952b4fcab4be090c073210c6" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.177692 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22357159ca01aa7807978e2e49d9d640795bdd23952b4fcab4be090c073210c6"} err="failed to get container status \"22357159ca01aa7807978e2e49d9d640795bdd23952b4fcab4be090c073210c6\": rpc error: code = NotFound desc = could not find container \"22357159ca01aa7807978e2e49d9d640795bdd23952b4fcab4be090c073210c6\": 
container with ID starting with 22357159ca01aa7807978e2e49d9d640795bdd23952b4fcab4be090c073210c6 not found: ID does not exist" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.195263 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.195235476 podStartE2EDuration="2.195235476s" podCreationTimestamp="2026-02-18 19:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:45.165717856 +0000 UTC m=+1347.616130662" watchObservedRunningTime="2026-02-18 19:40:45.195235476 +0000 UTC m=+1347.645648432" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.208287 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.240225 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.253221 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:40:45 crc kubenswrapper[4754]: E0218 19:40:45.253721 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f70d6e04-a01e-4213-83b3-b986177730f1" containerName="kube-state-metrics" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.253741 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f70d6e04-a01e-4213-83b3-b986177730f1" containerName="kube-state-metrics" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.253937 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f70d6e04-a01e-4213-83b3-b986177730f1" containerName="kube-state-metrics" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.254818 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.258376 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.258534 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.262725 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.374625 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c00d0688-2346-4f50-8a83-b31a45899ace-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c00d0688-2346-4f50-8a83-b31a45899ace\") " pod="openstack/kube-state-metrics-0" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.375022 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c00d0688-2346-4f50-8a83-b31a45899ace-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c00d0688-2346-4f50-8a83-b31a45899ace\") " pod="openstack/kube-state-metrics-0" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.375136 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c00d0688-2346-4f50-8a83-b31a45899ace-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c00d0688-2346-4f50-8a83-b31a45899ace\") " pod="openstack/kube-state-metrics-0" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.375185 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtnhx\" (UniqueName: 
\"kubernetes.io/projected/c00d0688-2346-4f50-8a83-b31a45899ace-kube-api-access-mtnhx\") pod \"kube-state-metrics-0\" (UID: \"c00d0688-2346-4f50-8a83-b31a45899ace\") " pod="openstack/kube-state-metrics-0" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.476755 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c00d0688-2346-4f50-8a83-b31a45899ace-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c00d0688-2346-4f50-8a83-b31a45899ace\") " pod="openstack/kube-state-metrics-0" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.476921 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c00d0688-2346-4f50-8a83-b31a45899ace-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c00d0688-2346-4f50-8a83-b31a45899ace\") " pod="openstack/kube-state-metrics-0" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.476975 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c00d0688-2346-4f50-8a83-b31a45899ace-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c00d0688-2346-4f50-8a83-b31a45899ace\") " pod="openstack/kube-state-metrics-0" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.476999 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtnhx\" (UniqueName: \"kubernetes.io/projected/c00d0688-2346-4f50-8a83-b31a45899ace-kube-api-access-mtnhx\") pod \"kube-state-metrics-0\" (UID: \"c00d0688-2346-4f50-8a83-b31a45899ace\") " pod="openstack/kube-state-metrics-0" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.481330 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/c00d0688-2346-4f50-8a83-b31a45899ace-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c00d0688-2346-4f50-8a83-b31a45899ace\") " pod="openstack/kube-state-metrics-0" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.483298 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c00d0688-2346-4f50-8a83-b31a45899ace-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c00d0688-2346-4f50-8a83-b31a45899ace\") " pod="openstack/kube-state-metrics-0" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.484058 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c00d0688-2346-4f50-8a83-b31a45899ace-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c00d0688-2346-4f50-8a83-b31a45899ace\") " pod="openstack/kube-state-metrics-0" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.502463 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtnhx\" (UniqueName: \"kubernetes.io/projected/c00d0688-2346-4f50-8a83-b31a45899ace-kube-api-access-mtnhx\") pod \"kube-state-metrics-0\" (UID: \"c00d0688-2346-4f50-8a83-b31a45899ace\") " pod="openstack/kube-state-metrics-0" Feb 18 19:40:45 crc kubenswrapper[4754]: I0218 19:40:45.577116 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:40:46 crc kubenswrapper[4754]: I0218 19:40:46.093868 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:40:46 crc kubenswrapper[4754]: W0218 19:40:46.105415 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc00d0688_2346_4f50_8a83_b31a45899ace.slice/crio-862b53ee0f444a58718498787ef5eed2515798403ef7030a59888fec9c8a8d33 WatchSource:0}: Error finding container 862b53ee0f444a58718498787ef5eed2515798403ef7030a59888fec9c8a8d33: Status 404 returned error can't find the container with id 862b53ee0f444a58718498787ef5eed2515798403ef7030a59888fec9c8a8d33 Feb 18 19:40:46 crc kubenswrapper[4754]: I0218 19:40:46.169164 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c00d0688-2346-4f50-8a83-b31a45899ace","Type":"ContainerStarted","Data":"862b53ee0f444a58718498787ef5eed2515798403ef7030a59888fec9c8a8d33"} Feb 18 19:40:46 crc kubenswrapper[4754]: I0218 19:40:46.226224 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f70d6e04-a01e-4213-83b3-b986177730f1" path="/var/lib/kubelet/pods/f70d6e04-a01e-4213-83b3-b986177730f1/volumes" Feb 18 19:40:46 crc kubenswrapper[4754]: I0218 19:40:46.439438 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:40:46 crc kubenswrapper[4754]: I0218 19:40:46.439811 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerName="ceilometer-central-agent" containerID="cri-o://206bd0162aab9a2eb0f20d47ba90b6aade689747f57f3db22b966ddf91449382" gracePeriod=30 Feb 18 19:40:46 crc kubenswrapper[4754]: I0218 19:40:46.439885 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerName="sg-core" containerID="cri-o://9f8bb8c3c2809633fb21d18cbe10084a027a2a869009cc22e868d93f88fcaef3" gracePeriod=30 Feb 18 19:40:46 crc kubenswrapper[4754]: I0218 19:40:46.439987 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerName="ceilometer-notification-agent" containerID="cri-o://57170c24881f11cec7d70328d93cc6582762490a3a7df71c8d4533bb45fa5b6c" gracePeriod=30 Feb 18 19:40:46 crc kubenswrapper[4754]: I0218 19:40:46.440280 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerName="proxy-httpd" containerID="cri-o://a454bf3453cf2d1a1963e1099f0e5fd3c21490d5a80d71c9c4f6c3f6401b9eae" gracePeriod=30 Feb 18 19:40:47 crc kubenswrapper[4754]: I0218 19:40:47.177517 4754 generic.go:334] "Generic (PLEG): container finished" podID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerID="a454bf3453cf2d1a1963e1099f0e5fd3c21490d5a80d71c9c4f6c3f6401b9eae" exitCode=0 Feb 18 19:40:47 crc kubenswrapper[4754]: I0218 19:40:47.177941 4754 generic.go:334] "Generic (PLEG): container finished" podID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerID="9f8bb8c3c2809633fb21d18cbe10084a027a2a869009cc22e868d93f88fcaef3" exitCode=2 Feb 18 19:40:47 crc kubenswrapper[4754]: I0218 19:40:47.177953 4754 generic.go:334] "Generic (PLEG): container finished" podID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerID="206bd0162aab9a2eb0f20d47ba90b6aade689747f57f3db22b966ddf91449382" exitCode=0 Feb 18 19:40:47 crc kubenswrapper[4754]: I0218 19:40:47.177872 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c101acb-e6ac-4e92-8a34-8641c837b9c7","Type":"ContainerDied","Data":"a454bf3453cf2d1a1963e1099f0e5fd3c21490d5a80d71c9c4f6c3f6401b9eae"} Feb 18 19:40:47 crc kubenswrapper[4754]: I0218 
19:40:47.178017 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c101acb-e6ac-4e92-8a34-8641c837b9c7","Type":"ContainerDied","Data":"9f8bb8c3c2809633fb21d18cbe10084a027a2a869009cc22e868d93f88fcaef3"} Feb 18 19:40:47 crc kubenswrapper[4754]: I0218 19:40:47.178030 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c101acb-e6ac-4e92-8a34-8641c837b9c7","Type":"ContainerDied","Data":"206bd0162aab9a2eb0f20d47ba90b6aade689747f57f3db22b966ddf91449382"} Feb 18 19:40:47 crc kubenswrapper[4754]: I0218 19:40:47.179622 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c00d0688-2346-4f50-8a83-b31a45899ace","Type":"ContainerStarted","Data":"2c2a37e382854230a05021c3732d1f56cdb9bfd1a720b87aee9fdf5d28af7314"} Feb 18 19:40:47 crc kubenswrapper[4754]: I0218 19:40:47.180095 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 19:40:47 crc kubenswrapper[4754]: I0218 19:40:47.206194 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.686789599 podStartE2EDuration="2.206168546s" podCreationTimestamp="2026-02-18 19:40:45 +0000 UTC" firstStartedPulling="2026-02-18 19:40:46.110418212 +0000 UTC m=+1348.560831018" lastFinishedPulling="2026-02-18 19:40:46.629797149 +0000 UTC m=+1349.080209965" observedRunningTime="2026-02-18 19:40:47.198343006 +0000 UTC m=+1349.648755812" watchObservedRunningTime="2026-02-18 19:40:47.206168546 +0000 UTC m=+1349.656581342" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.096984 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.165310 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-config-data\") pod \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.165404 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-scripts\") pod \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.165459 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-sg-core-conf-yaml\") pod \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.165499 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c101acb-e6ac-4e92-8a34-8641c837b9c7-log-httpd\") pod \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.165641 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-combined-ca-bundle\") pod \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.165710 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8c101acb-e6ac-4e92-8a34-8641c837b9c7-run-httpd\") pod \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.165782 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjrjc\" (UniqueName: \"kubernetes.io/projected/8c101acb-e6ac-4e92-8a34-8641c837b9c7-kube-api-access-kjrjc\") pod \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\" (UID: \"8c101acb-e6ac-4e92-8a34-8641c837b9c7\") " Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.166697 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c101acb-e6ac-4e92-8a34-8641c837b9c7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8c101acb-e6ac-4e92-8a34-8641c837b9c7" (UID: "8c101acb-e6ac-4e92-8a34-8641c837b9c7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.166837 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c101acb-e6ac-4e92-8a34-8641c837b9c7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8c101acb-e6ac-4e92-8a34-8641c837b9c7" (UID: "8c101acb-e6ac-4e92-8a34-8641c837b9c7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.172482 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-scripts" (OuterVolumeSpecName: "scripts") pod "8c101acb-e6ac-4e92-8a34-8641c837b9c7" (UID: "8c101acb-e6ac-4e92-8a34-8641c837b9c7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.186420 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c101acb-e6ac-4e92-8a34-8641c837b9c7-kube-api-access-kjrjc" (OuterVolumeSpecName: "kube-api-access-kjrjc") pod "8c101acb-e6ac-4e92-8a34-8641c837b9c7" (UID: "8c101acb-e6ac-4e92-8a34-8641c837b9c7"). InnerVolumeSpecName "kube-api-access-kjrjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.231891 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8c101acb-e6ac-4e92-8a34-8641c837b9c7" (UID: "8c101acb-e6ac-4e92-8a34-8641c837b9c7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.268315 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.268344 4754 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.268357 4754 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c101acb-e6ac-4e92-8a34-8641c837b9c7-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.268366 4754 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c101acb-e6ac-4e92-8a34-8641c837b9c7-run-httpd\") on node \"crc\" DevicePath \"\"" 
Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.268379 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjrjc\" (UniqueName: \"kubernetes.io/projected/8c101acb-e6ac-4e92-8a34-8641c837b9c7-kube-api-access-kjrjc\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.277194 4754 generic.go:334] "Generic (PLEG): container finished" podID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerID="57170c24881f11cec7d70328d93cc6582762490a3a7df71c8d4533bb45fa5b6c" exitCode=0 Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.277292 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.298244 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c101acb-e6ac-4e92-8a34-8641c837b9c7" (UID: "8c101acb-e6ac-4e92-8a34-8641c837b9c7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.318800 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c101acb-e6ac-4e92-8a34-8641c837b9c7","Type":"ContainerDied","Data":"57170c24881f11cec7d70328d93cc6582762490a3a7df71c8d4533bb45fa5b6c"} Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.318863 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c101acb-e6ac-4e92-8a34-8641c837b9c7","Type":"ContainerDied","Data":"f033ce13e4d5fc6bab63c39ed3f8e84d144dbec32c8891cf839dfe1d34e5297c"} Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.318884 4754 scope.go:117] "RemoveContainer" containerID="a454bf3453cf2d1a1963e1099f0e5fd3c21490d5a80d71c9c4f6c3f6401b9eae" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.320792 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-config-data" (OuterVolumeSpecName: "config-data") pod "8c101acb-e6ac-4e92-8a34-8641c837b9c7" (UID: "8c101acb-e6ac-4e92-8a34-8641c837b9c7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.363100 4754 scope.go:117] "RemoveContainer" containerID="9f8bb8c3c2809633fb21d18cbe10084a027a2a869009cc22e868d93f88fcaef3" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.371696 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.371747 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c101acb-e6ac-4e92-8a34-8641c837b9c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.389252 4754 scope.go:117] "RemoveContainer" containerID="57170c24881f11cec7d70328d93cc6582762490a3a7df71c8d4533bb45fa5b6c" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.449913 4754 scope.go:117] "RemoveContainer" containerID="206bd0162aab9a2eb0f20d47ba90b6aade689747f57f3db22b966ddf91449382" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.473597 4754 scope.go:117] "RemoveContainer" containerID="a454bf3453cf2d1a1963e1099f0e5fd3c21490d5a80d71c9c4f6c3f6401b9eae" Feb 18 19:40:50 crc kubenswrapper[4754]: E0218 19:40:50.474099 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a454bf3453cf2d1a1963e1099f0e5fd3c21490d5a80d71c9c4f6c3f6401b9eae\": container with ID starting with a454bf3453cf2d1a1963e1099f0e5fd3c21490d5a80d71c9c4f6c3f6401b9eae not found: ID does not exist" containerID="a454bf3453cf2d1a1963e1099f0e5fd3c21490d5a80d71c9c4f6c3f6401b9eae" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.474172 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a454bf3453cf2d1a1963e1099f0e5fd3c21490d5a80d71c9c4f6c3f6401b9eae"} 
err="failed to get container status \"a454bf3453cf2d1a1963e1099f0e5fd3c21490d5a80d71c9c4f6c3f6401b9eae\": rpc error: code = NotFound desc = could not find container \"a454bf3453cf2d1a1963e1099f0e5fd3c21490d5a80d71c9c4f6c3f6401b9eae\": container with ID starting with a454bf3453cf2d1a1963e1099f0e5fd3c21490d5a80d71c9c4f6c3f6401b9eae not found: ID does not exist" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.474206 4754 scope.go:117] "RemoveContainer" containerID="9f8bb8c3c2809633fb21d18cbe10084a027a2a869009cc22e868d93f88fcaef3" Feb 18 19:40:50 crc kubenswrapper[4754]: E0218 19:40:50.476633 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f8bb8c3c2809633fb21d18cbe10084a027a2a869009cc22e868d93f88fcaef3\": container with ID starting with 9f8bb8c3c2809633fb21d18cbe10084a027a2a869009cc22e868d93f88fcaef3 not found: ID does not exist" containerID="9f8bb8c3c2809633fb21d18cbe10084a027a2a869009cc22e868d93f88fcaef3" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.476698 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f8bb8c3c2809633fb21d18cbe10084a027a2a869009cc22e868d93f88fcaef3"} err="failed to get container status \"9f8bb8c3c2809633fb21d18cbe10084a027a2a869009cc22e868d93f88fcaef3\": rpc error: code = NotFound desc = could not find container \"9f8bb8c3c2809633fb21d18cbe10084a027a2a869009cc22e868d93f88fcaef3\": container with ID starting with 9f8bb8c3c2809633fb21d18cbe10084a027a2a869009cc22e868d93f88fcaef3 not found: ID does not exist" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.476755 4754 scope.go:117] "RemoveContainer" containerID="57170c24881f11cec7d70328d93cc6582762490a3a7df71c8d4533bb45fa5b6c" Feb 18 19:40:50 crc kubenswrapper[4754]: E0218 19:40:50.477404 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"57170c24881f11cec7d70328d93cc6582762490a3a7df71c8d4533bb45fa5b6c\": container with ID starting with 57170c24881f11cec7d70328d93cc6582762490a3a7df71c8d4533bb45fa5b6c not found: ID does not exist" containerID="57170c24881f11cec7d70328d93cc6582762490a3a7df71c8d4533bb45fa5b6c" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.477458 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57170c24881f11cec7d70328d93cc6582762490a3a7df71c8d4533bb45fa5b6c"} err="failed to get container status \"57170c24881f11cec7d70328d93cc6582762490a3a7df71c8d4533bb45fa5b6c\": rpc error: code = NotFound desc = could not find container \"57170c24881f11cec7d70328d93cc6582762490a3a7df71c8d4533bb45fa5b6c\": container with ID starting with 57170c24881f11cec7d70328d93cc6582762490a3a7df71c8d4533bb45fa5b6c not found: ID does not exist" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.477502 4754 scope.go:117] "RemoveContainer" containerID="206bd0162aab9a2eb0f20d47ba90b6aade689747f57f3db22b966ddf91449382" Feb 18 19:40:50 crc kubenswrapper[4754]: E0218 19:40:50.477992 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"206bd0162aab9a2eb0f20d47ba90b6aade689747f57f3db22b966ddf91449382\": container with ID starting with 206bd0162aab9a2eb0f20d47ba90b6aade689747f57f3db22b966ddf91449382 not found: ID does not exist" containerID="206bd0162aab9a2eb0f20d47ba90b6aade689747f57f3db22b966ddf91449382" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.478036 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"206bd0162aab9a2eb0f20d47ba90b6aade689747f57f3db22b966ddf91449382"} err="failed to get container status \"206bd0162aab9a2eb0f20d47ba90b6aade689747f57f3db22b966ddf91449382\": rpc error: code = NotFound desc = could not find container \"206bd0162aab9a2eb0f20d47ba90b6aade689747f57f3db22b966ddf91449382\": container with ID 
starting with 206bd0162aab9a2eb0f20d47ba90b6aade689747f57f3db22b966ddf91449382 not found: ID does not exist" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.615456 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.623822 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.640472 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:40:50 crc kubenswrapper[4754]: E0218 19:40:50.641055 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerName="sg-core" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.641090 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerName="sg-core" Feb 18 19:40:50 crc kubenswrapper[4754]: E0218 19:40:50.641105 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerName="ceilometer-notification-agent" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.641113 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerName="ceilometer-notification-agent" Feb 18 19:40:50 crc kubenswrapper[4754]: E0218 19:40:50.641128 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerName="proxy-httpd" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.641134 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerName="proxy-httpd" Feb 18 19:40:50 crc kubenswrapper[4754]: E0218 19:40:50.641186 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerName="ceilometer-central-agent" Feb 18 19:40:50 crc kubenswrapper[4754]: 
I0218 19:40:50.641192 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerName="ceilometer-central-agent" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.641364 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerName="ceilometer-central-agent" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.641383 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerName="proxy-httpd" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.641394 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerName="sg-core" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.641405 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" containerName="ceilometer-notification-agent" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.643210 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.663275 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.663436 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.663586 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.666996 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.676875 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.676933 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-scripts\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.676979 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a0bd74-4072-4689-bfdc-4ab76f0b9462-log-httpd\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.677030 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk7j4\" 
(UniqueName: \"kubernetes.io/projected/02a0bd74-4072-4689-bfdc-4ab76f0b9462-kube-api-access-hk7j4\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.677136 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-config-data\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.677317 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.677347 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a0bd74-4072-4689-bfdc-4ab76f0b9462-run-httpd\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.677394 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.779180 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.779290 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.779322 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-scripts\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.779362 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a0bd74-4072-4689-bfdc-4ab76f0b9462-log-httpd\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.779415 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk7j4\" (UniqueName: \"kubernetes.io/projected/02a0bd74-4072-4689-bfdc-4ab76f0b9462-kube-api-access-hk7j4\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.780032 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a0bd74-4072-4689-bfdc-4ab76f0b9462-log-httpd\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.780239 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-config-data\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.780360 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.780392 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a0bd74-4072-4689-bfdc-4ab76f0b9462-run-httpd\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.780751 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a0bd74-4072-4689-bfdc-4ab76f0b9462-run-httpd\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.783710 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.783741 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 
19:40:50.784398 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.784862 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-scripts\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.785058 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-config-data\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.796800 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk7j4\" (UniqueName: \"kubernetes.io/projected/02a0bd74-4072-4689-bfdc-4ab76f0b9462-kube-api-access-hk7j4\") pod \"ceilometer-0\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") " pod="openstack/ceilometer-0" Feb 18 19:40:50 crc kubenswrapper[4754]: I0218 19:40:50.968177 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:40:51 crc kubenswrapper[4754]: I0218 19:40:51.459633 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:40:51 crc kubenswrapper[4754]: W0218 19:40:51.460623 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02a0bd74_4072_4689_bfdc_4ab76f0b9462.slice/crio-b45086faae92e8a47a3f2cc726b57af275c6ec358a55d4e566f26e7b3d1fddfc WatchSource:0}: Error finding container b45086faae92e8a47a3f2cc726b57af275c6ec358a55d4e566f26e7b3d1fddfc: Status 404 returned error can't find the container with id b45086faae92e8a47a3f2cc726b57af275c6ec358a55d4e566f26e7b3d1fddfc Feb 18 19:40:52 crc kubenswrapper[4754]: I0218 19:40:52.230071 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c101acb-e6ac-4e92-8a34-8641c837b9c7" path="/var/lib/kubelet/pods/8c101acb-e6ac-4e92-8a34-8641c837b9c7/volumes" Feb 18 19:40:52 crc kubenswrapper[4754]: I0218 19:40:52.308052 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02a0bd74-4072-4689-bfdc-4ab76f0b9462","Type":"ContainerStarted","Data":"d8a94db86100dad8d17a427b76a53017994bb39eba38cca08874d753f4bbe70c"} Feb 18 19:40:52 crc kubenswrapper[4754]: I0218 19:40:52.308118 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02a0bd74-4072-4689-bfdc-4ab76f0b9462","Type":"ContainerStarted","Data":"b45086faae92e8a47a3f2cc726b57af275c6ec358a55d4e566f26e7b3d1fddfc"} Feb 18 19:40:53 crc kubenswrapper[4754]: I0218 19:40:53.322014 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02a0bd74-4072-4689-bfdc-4ab76f0b9462","Type":"ContainerStarted","Data":"f22ecaf0fd7cd4d815fe74705bd073b0ca6c1f6fa1d163d88cb5e7daa7f5c90e"} Feb 18 19:40:53 crc kubenswrapper[4754]: I0218 19:40:53.557004 4754 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.051237 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-8n2ll"] Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.053218 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8n2ll" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.055408 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.056725 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.082208 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8n2ll"] Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.146477 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbgn5\" (UniqueName: \"kubernetes.io/projected/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-kube-api-access-nbgn5\") pod \"nova-cell0-cell-mapping-8n2ll\" (UID: \"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\") " pod="openstack/nova-cell0-cell-mapping-8n2ll" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.146722 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-config-data\") pod \"nova-cell0-cell-mapping-8n2ll\" (UID: \"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\") " pod="openstack/nova-cell0-cell-mapping-8n2ll" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.146822 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-scripts\") pod \"nova-cell0-cell-mapping-8n2ll\" (UID: \"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\") " pod="openstack/nova-cell0-cell-mapping-8n2ll" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.146843 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8n2ll\" (UID: \"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\") " pod="openstack/nova-cell0-cell-mapping-8n2ll" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.243741 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.246939 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.249942 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjk24\" (UniqueName: \"kubernetes.io/projected/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-kube-api-access-jjk24\") pod \"nova-api-0\" (UID: \"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\") " pod="openstack/nova-api-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.250046 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-logs\") pod \"nova-api-0\" (UID: \"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\") " pod="openstack/nova-api-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.250159 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbgn5\" (UniqueName: \"kubernetes.io/projected/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-kube-api-access-nbgn5\") pod \"nova-cell0-cell-mapping-8n2ll\" (UID: 
\"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\") " pod="openstack/nova-cell0-cell-mapping-8n2ll" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.250246 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\") " pod="openstack/nova-api-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.250358 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-config-data\") pod \"nova-cell0-cell-mapping-8n2ll\" (UID: \"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\") " pod="openstack/nova-cell0-cell-mapping-8n2ll" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.250496 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-scripts\") pod \"nova-cell0-cell-mapping-8n2ll\" (UID: \"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\") " pod="openstack/nova-cell0-cell-mapping-8n2ll" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.250531 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8n2ll\" (UID: \"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\") " pod="openstack/nova-cell0-cell-mapping-8n2ll" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.250575 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-config-data\") pod \"nova-api-0\" (UID: \"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\") " pod="openstack/nova-api-0" Feb 18 19:40:54 
crc kubenswrapper[4754]: I0218 19:40:54.251520 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.266275 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-scripts\") pod \"nova-cell0-cell-mapping-8n2ll\" (UID: \"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\") " pod="openstack/nova-cell0-cell-mapping-8n2ll" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.289225 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8n2ll\" (UID: \"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\") " pod="openstack/nova-cell0-cell-mapping-8n2ll" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.289776 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-config-data\") pod \"nova-cell0-cell-mapping-8n2ll\" (UID: \"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\") " pod="openstack/nova-cell0-cell-mapping-8n2ll" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.338393 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.341171 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02a0bd74-4072-4689-bfdc-4ab76f0b9462","Type":"ContainerStarted","Data":"4268fe83fc1d6f316b07d5a9a93f6b1a3ed5b0e299ede27385ac233ddf25f51f"} Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.358816 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.359825 4754 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-jjk24\" (UniqueName: \"kubernetes.io/projected/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-kube-api-access-jjk24\") pod \"nova-api-0\" (UID: \"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\") " pod="openstack/nova-api-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.359971 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-logs\") pod \"nova-api-0\" (UID: \"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\") " pod="openstack/nova-api-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.360129 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\") " pod="openstack/nova-api-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.360314 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-config-data\") pod \"nova-api-0\" (UID: \"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\") " pod="openstack/nova-api-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.360579 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.361544 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-logs\") pod \"nova-api-0\" (UID: \"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\") " pod="openstack/nova-api-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.369958 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\") " pod="openstack/nova-api-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.373699 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-config-data\") pod \"nova-api-0\" (UID: \"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\") " pod="openstack/nova-api-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.376076 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.379817 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbgn5\" (UniqueName: \"kubernetes.io/projected/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-kube-api-access-nbgn5\") pod \"nova-cell0-cell-mapping-8n2ll\" (UID: \"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\") " pod="openstack/nova-cell0-cell-mapping-8n2ll" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.380229 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjk24\" (UniqueName: \"kubernetes.io/projected/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-kube-api-access-jjk24\") pod \"nova-api-0\" (UID: \"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\") " pod="openstack/nova-api-0" Feb 
18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.391906 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.431855 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.433987 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.439029 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.452570 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.492301 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.547626 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.556483 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.560638 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.569019 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e730fc15-7e56-45fa-a275-aca5ca181835-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e730fc15-7e56-45fa-a275-aca5ca181835\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.569094 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr68r\" (UniqueName: \"kubernetes.io/projected/e730fc15-7e56-45fa-a275-aca5ca181835-kube-api-access-lr68r\") pod \"nova-cell1-novncproxy-0\" (UID: \"e730fc15-7e56-45fa-a275-aca5ca181835\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.575364 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6\") " pod="openstack/nova-scheduler-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.575536 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e730fc15-7e56-45fa-a275-aca5ca181835-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e730fc15-7e56-45fa-a275-aca5ca181835\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.575603 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6-config-data\") pod \"nova-scheduler-0\" (UID: \"fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6\") " pod="openstack/nova-scheduler-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.575840 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9sww\" (UniqueName: \"kubernetes.io/projected/fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6-kube-api-access-s9sww\") pod \"nova-scheduler-0\" (UID: \"fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6\") " pod="openstack/nova-scheduler-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.681385 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6\") " pod="openstack/nova-scheduler-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.681439 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e730fc15-7e56-45fa-a275-aca5ca181835-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e730fc15-7e56-45fa-a275-aca5ca181835\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.681467 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6-config-data\") pod \"nova-scheduler-0\" (UID: \"fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6\") " pod="openstack/nova-scheduler-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.681504 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0e52c00-b81c-42f0-9526-b0e014d46f6d-logs\") pod \"nova-metadata-0\" (UID: 
\"f0e52c00-b81c-42f0-9526-b0e014d46f6d\") " pod="openstack/nova-metadata-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.681530 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e52c00-b81c-42f0-9526-b0e014d46f6d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f0e52c00-b81c-42f0-9526-b0e014d46f6d\") " pod="openstack/nova-metadata-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.681581 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9sww\" (UniqueName: \"kubernetes.io/projected/fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6-kube-api-access-s9sww\") pod \"nova-scheduler-0\" (UID: \"fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6\") " pod="openstack/nova-scheduler-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.681636 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e52c00-b81c-42f0-9526-b0e014d46f6d-config-data\") pod \"nova-metadata-0\" (UID: \"f0e52c00-b81c-42f0-9526-b0e014d46f6d\") " pod="openstack/nova-metadata-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.681659 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e730fc15-7e56-45fa-a275-aca5ca181835-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e730fc15-7e56-45fa-a275-aca5ca181835\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.681676 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvp27\" (UniqueName: \"kubernetes.io/projected/f0e52c00-b81c-42f0-9526-b0e014d46f6d-kube-api-access-zvp27\") pod \"nova-metadata-0\" (UID: \"f0e52c00-b81c-42f0-9526-b0e014d46f6d\") " pod="openstack/nova-metadata-0" Feb 
18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.681705 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr68r\" (UniqueName: \"kubernetes.io/projected/e730fc15-7e56-45fa-a275-aca5ca181835-kube-api-access-lr68r\") pod \"nova-cell1-novncproxy-0\" (UID: \"e730fc15-7e56-45fa-a275-aca5ca181835\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.685677 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8n2ll" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.688863 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6\") " pod="openstack/nova-scheduler-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.699024 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e730fc15-7e56-45fa-a275-aca5ca181835-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e730fc15-7e56-45fa-a275-aca5ca181835\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.699104 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.705887 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e730fc15-7e56-45fa-a275-aca5ca181835-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e730fc15-7e56-45fa-a275-aca5ca181835\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.706630 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr68r\" (UniqueName: 
\"kubernetes.io/projected/e730fc15-7e56-45fa-a275-aca5ca181835-kube-api-access-lr68r\") pod \"nova-cell1-novncproxy-0\" (UID: \"e730fc15-7e56-45fa-a275-aca5ca181835\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.711663 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9sww\" (UniqueName: \"kubernetes.io/projected/fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6-kube-api-access-s9sww\") pod \"nova-scheduler-0\" (UID: \"fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6\") " pod="openstack/nova-scheduler-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.718803 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6-config-data\") pod \"nova-scheduler-0\" (UID: \"fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6\") " pod="openstack/nova-scheduler-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.751018 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-l2m2z"] Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.753577 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.785614 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e52c00-b81c-42f0-9526-b0e014d46f6d-config-data\") pod \"nova-metadata-0\" (UID: \"f0e52c00-b81c-42f0-9526-b0e014d46f6d\") " pod="openstack/nova-metadata-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.786012 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvp27\" (UniqueName: \"kubernetes.io/projected/f0e52c00-b81c-42f0-9526-b0e014d46f6d-kube-api-access-zvp27\") pod \"nova-metadata-0\" (UID: \"f0e52c00-b81c-42f0-9526-b0e014d46f6d\") " pod="openstack/nova-metadata-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.786169 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0e52c00-b81c-42f0-9526-b0e014d46f6d-logs\") pod \"nova-metadata-0\" (UID: \"f0e52c00-b81c-42f0-9526-b0e014d46f6d\") " pod="openstack/nova-metadata-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.786216 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e52c00-b81c-42f0-9526-b0e014d46f6d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f0e52c00-b81c-42f0-9526-b0e014d46f6d\") " pod="openstack/nova-metadata-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.800418 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0e52c00-b81c-42f0-9526-b0e014d46f6d-logs\") pod \"nova-metadata-0\" (UID: \"f0e52c00-b81c-42f0-9526-b0e014d46f6d\") " pod="openstack/nova-metadata-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.819407 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-bccf8f775-l2m2z"] Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.820926 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e52c00-b81c-42f0-9526-b0e014d46f6d-config-data\") pod \"nova-metadata-0\" (UID: \"f0e52c00-b81c-42f0-9526-b0e014d46f6d\") " pod="openstack/nova-metadata-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.826469 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e52c00-b81c-42f0-9526-b0e014d46f6d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f0e52c00-b81c-42f0-9526-b0e014d46f6d\") " pod="openstack/nova-metadata-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.834794 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.855526 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvp27\" (UniqueName: \"kubernetes.io/projected/f0e52c00-b81c-42f0-9526-b0e014d46f6d-kube-api-access-zvp27\") pod \"nova-metadata-0\" (UID: \"f0e52c00-b81c-42f0-9526-b0e014d46f6d\") " pod="openstack/nova-metadata-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.890751 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-config\") pod \"dnsmasq-dns-bccf8f775-l2m2z\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.890901 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-dns-swift-storage-0\") pod 
\"dnsmasq-dns-bccf8f775-l2m2z\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.891039 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-l2m2z\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.891099 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69lj4\" (UniqueName: \"kubernetes.io/projected/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-kube-api-access-69lj4\") pod \"dnsmasq-dns-bccf8f775-l2m2z\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.891168 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-dns-svc\") pod \"dnsmasq-dns-bccf8f775-l2m2z\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.891225 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-l2m2z\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.937643 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.993386 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-dns-svc\") pod \"dnsmasq-dns-bccf8f775-l2m2z\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.993453 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-l2m2z\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.993505 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-config\") pod \"dnsmasq-dns-bccf8f775-l2m2z\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.993562 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-l2m2z\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.993623 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-l2m2z\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:54 crc 
kubenswrapper[4754]: I0218 19:40:54.993654 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69lj4\" (UniqueName: \"kubernetes.io/projected/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-kube-api-access-69lj4\") pod \"dnsmasq-dns-bccf8f775-l2m2z\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.995057 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-dns-svc\") pod \"dnsmasq-dns-bccf8f775-l2m2z\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.995644 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-l2m2z\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:54 crc kubenswrapper[4754]: I0218 19:40:54.996786 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-l2m2z\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:55 crc kubenswrapper[4754]: I0218 19:40:54.997901 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:40:55 crc kubenswrapper[4754]: I0218 19:40:54.999658 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-l2m2z\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:55 crc kubenswrapper[4754]: I0218 19:40:55.003727 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-config\") pod \"dnsmasq-dns-bccf8f775-l2m2z\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:55 crc kubenswrapper[4754]: I0218 19:40:55.028580 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69lj4\" (UniqueName: \"kubernetes.io/projected/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-kube-api-access-69lj4\") pod \"dnsmasq-dns-bccf8f775-l2m2z\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:55 crc kubenswrapper[4754]: I0218 19:40:55.170691 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:55 crc kubenswrapper[4754]: I0218 19:40:55.425134 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:40:55 crc kubenswrapper[4754]: I0218 19:40:55.630641 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 19:40:55 crc kubenswrapper[4754]: I0218 19:40:55.683215 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:40:55 crc kubenswrapper[4754]: I0218 19:40:55.694066 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8n2ll"] Feb 18 19:40:55 crc kubenswrapper[4754]: I0218 19:40:55.822493 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:40:55 crc kubenswrapper[4754]: W0218 19:40:55.825742 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0e52c00_b81c_42f0_9526_b0e014d46f6d.slice/crio-d61f23d6c759f20b4f9c86037345311661effe135e8604744a0e0a623b672437 WatchSource:0}: Error finding container d61f23d6c759f20b4f9c86037345311661effe135e8604744a0e0a623b672437: Status 404 returned error can't find the container with id d61f23d6c759f20b4f9c86037345311661effe135e8604744a0e0a623b672437 Feb 18 19:40:55 crc kubenswrapper[4754]: W0218 19:40:55.827355 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode730fc15_7e56_45fa_a275_aca5ca181835.slice/crio-c3bfc763191b93e95560e94f129406074619ad44ff97e189e8eba044152a938e WatchSource:0}: Error finding container c3bfc763191b93e95560e94f129406074619ad44ff97e189e8eba044152a938e: Status 404 returned error can't find the container with id c3bfc763191b93e95560e94f129406074619ad44ff97e189e8eba044152a938e Feb 18 19:40:55 crc kubenswrapper[4754]: I0218 
19:40:55.840440 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:40:55 crc kubenswrapper[4754]: I0218 19:40:55.968023 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-l2m2z"] Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.155996 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7m5f2"] Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.158363 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7m5f2" Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.163886 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.164248 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.228161 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7m5f2"] Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.243589 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0498eea3-d1f4-43dd-82a3-4e98065a9fda-config-data\") pod \"nova-cell1-conductor-db-sync-7m5f2\" (UID: \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\") " pod="openstack/nova-cell1-conductor-db-sync-7m5f2" Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.243647 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0498eea3-d1f4-43dd-82a3-4e98065a9fda-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7m5f2\" (UID: \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\") " pod="openstack/nova-cell1-conductor-db-sync-7m5f2" Feb 18 19:40:56 crc 
kubenswrapper[4754]: I0218 19:40:56.243705 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0498eea3-d1f4-43dd-82a3-4e98065a9fda-scripts\") pod \"nova-cell1-conductor-db-sync-7m5f2\" (UID: \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\") " pod="openstack/nova-cell1-conductor-db-sync-7m5f2" Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.243750 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cfcg\" (UniqueName: \"kubernetes.io/projected/0498eea3-d1f4-43dd-82a3-4e98065a9fda-kube-api-access-6cfcg\") pod \"nova-cell1-conductor-db-sync-7m5f2\" (UID: \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\") " pod="openstack/nova-cell1-conductor-db-sync-7m5f2" Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.345709 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cfcg\" (UniqueName: \"kubernetes.io/projected/0498eea3-d1f4-43dd-82a3-4e98065a9fda-kube-api-access-6cfcg\") pod \"nova-cell1-conductor-db-sync-7m5f2\" (UID: \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\") " pod="openstack/nova-cell1-conductor-db-sync-7m5f2" Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.345872 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0498eea3-d1f4-43dd-82a3-4e98065a9fda-config-data\") pod \"nova-cell1-conductor-db-sync-7m5f2\" (UID: \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\") " pod="openstack/nova-cell1-conductor-db-sync-7m5f2" Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.345927 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0498eea3-d1f4-43dd-82a3-4e98065a9fda-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7m5f2\" (UID: \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\") " 
pod="openstack/nova-cell1-conductor-db-sync-7m5f2" Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.345994 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0498eea3-d1f4-43dd-82a3-4e98065a9fda-scripts\") pod \"nova-cell1-conductor-db-sync-7m5f2\" (UID: \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\") " pod="openstack/nova-cell1-conductor-db-sync-7m5f2" Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.351567 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0498eea3-d1f4-43dd-82a3-4e98065a9fda-config-data\") pod \"nova-cell1-conductor-db-sync-7m5f2\" (UID: \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\") " pod="openstack/nova-cell1-conductor-db-sync-7m5f2" Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.356491 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0498eea3-d1f4-43dd-82a3-4e98065a9fda-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7m5f2\" (UID: \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\") " pod="openstack/nova-cell1-conductor-db-sync-7m5f2" Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.366864 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cfcg\" (UniqueName: \"kubernetes.io/projected/0498eea3-d1f4-43dd-82a3-4e98065a9fda-kube-api-access-6cfcg\") pod \"nova-cell1-conductor-db-sync-7m5f2\" (UID: \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\") " pod="openstack/nova-cell1-conductor-db-sync-7m5f2" Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.367772 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0498eea3-d1f4-43dd-82a3-4e98065a9fda-scripts\") pod \"nova-cell1-conductor-db-sync-7m5f2\" (UID: \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\") " pod="openstack/nova-cell1-conductor-db-sync-7m5f2" 
Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.389640 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0e52c00-b81c-42f0-9526-b0e014d46f6d","Type":"ContainerStarted","Data":"d61f23d6c759f20b4f9c86037345311661effe135e8604744a0e0a623b672437"} Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.391842 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1","Type":"ContainerStarted","Data":"c6f9a862bc1aa9d7780458fc6ad69b3a70543e1fff9b8db72a1ab49117640d08"} Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.396396 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8n2ll" event={"ID":"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1","Type":"ContainerStarted","Data":"c186b5c3527227838597ba94acfc66a3d49a4c83f60237e792c7cc69688329f0"} Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.396486 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8n2ll" event={"ID":"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1","Type":"ContainerStarted","Data":"9aef00d2e5c616e92b64d0827cd7f957e710d6579c87917d04aa7b778db37494"} Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.425637 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-8n2ll" podStartSLOduration=2.42561276 podStartE2EDuration="2.42561276s" podCreationTimestamp="2026-02-18 19:40:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:56.417822501 +0000 UTC m=+1358.868235317" watchObservedRunningTime="2026-02-18 19:40:56.42561276 +0000 UTC m=+1358.876025556" Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.438082 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"02a0bd74-4072-4689-bfdc-4ab76f0b9462","Type":"ContainerStarted","Data":"4f49ce9a0b51f0538ada6341c439e752fb0c206fc0c59e92169b5c14f0c5b95e"} Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.440101 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.445440 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6","Type":"ContainerStarted","Data":"c1dbabe2a846b19b5b6315a736985d68f974296e6b985834b4651eabc670d4be"} Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.458811 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" event={"ID":"7cdd5974-a24b-43a9-80d7-ad4c982aacb0","Type":"ContainerStarted","Data":"c630ba63f36df94031a07d3b3d409002b844b4049ba59b1e225535f7ca652771"} Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.458861 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" event={"ID":"7cdd5974-a24b-43a9-80d7-ad4c982aacb0","Type":"ContainerStarted","Data":"6b9760c7e8a1d8ad8b755d601e9041d6c7262fff9a5efde3b8534343b42e9f16"} Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.483999 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e730fc15-7e56-45fa-a275-aca5ca181835","Type":"ContainerStarted","Data":"c3bfc763191b93e95560e94f129406074619ad44ff97e189e8eba044152a938e"} Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.489832 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.008233119 podStartE2EDuration="6.489796401s" podCreationTimestamp="2026-02-18 19:40:50 +0000 UTC" firstStartedPulling="2026-02-18 19:40:51.462939418 +0000 UTC m=+1353.913352214" lastFinishedPulling="2026-02-18 19:40:55.9445027 +0000 UTC m=+1358.394915496" 
observedRunningTime="2026-02-18 19:40:56.467130183 +0000 UTC m=+1358.917542979" watchObservedRunningTime="2026-02-18 19:40:56.489796401 +0000 UTC m=+1358.940209197" Feb 18 19:40:56 crc kubenswrapper[4754]: I0218 19:40:56.505104 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7m5f2" Feb 18 19:40:57 crc kubenswrapper[4754]: I0218 19:40:57.357469 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7m5f2"] Feb 18 19:40:57 crc kubenswrapper[4754]: W0218 19:40:57.378579 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0498eea3_d1f4_43dd_82a3_4e98065a9fda.slice/crio-ef693e98fb01e2d220bf89aed8f4771c5848331c69a0a6cc21b06fa2e8678a6c WatchSource:0}: Error finding container ef693e98fb01e2d220bf89aed8f4771c5848331c69a0a6cc21b06fa2e8678a6c: Status 404 returned error can't find the container with id ef693e98fb01e2d220bf89aed8f4771c5848331c69a0a6cc21b06fa2e8678a6c Feb 18 19:40:57 crc kubenswrapper[4754]: I0218 19:40:57.514816 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7m5f2" event={"ID":"0498eea3-d1f4-43dd-82a3-4e98065a9fda","Type":"ContainerStarted","Data":"ef693e98fb01e2d220bf89aed8f4771c5848331c69a0a6cc21b06fa2e8678a6c"} Feb 18 19:40:57 crc kubenswrapper[4754]: I0218 19:40:57.519564 4754 generic.go:334] "Generic (PLEG): container finished" podID="7cdd5974-a24b-43a9-80d7-ad4c982aacb0" containerID="c630ba63f36df94031a07d3b3d409002b844b4049ba59b1e225535f7ca652771" exitCode=0 Feb 18 19:40:57 crc kubenswrapper[4754]: I0218 19:40:57.519629 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" event={"ID":"7cdd5974-a24b-43a9-80d7-ad4c982aacb0","Type":"ContainerDied","Data":"c630ba63f36df94031a07d3b3d409002b844b4049ba59b1e225535f7ca652771"} Feb 18 19:40:57 crc kubenswrapper[4754]: 
I0218 19:40:57.519700 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" event={"ID":"7cdd5974-a24b-43a9-80d7-ad4c982aacb0","Type":"ContainerStarted","Data":"71ce765678e395cfe5d0ee0509575d9d70938b963f44ab589e102fba2eb13a0e"} Feb 18 19:40:57 crc kubenswrapper[4754]: I0218 19:40:57.519868 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:40:57 crc kubenswrapper[4754]: I0218 19:40:57.552633 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" podStartSLOduration=3.552609355 podStartE2EDuration="3.552609355s" podCreationTimestamp="2026-02-18 19:40:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:57.542101796 +0000 UTC m=+1359.992514592" watchObservedRunningTime="2026-02-18 19:40:57.552609355 +0000 UTC m=+1360.003022151" Feb 18 19:40:58 crc kubenswrapper[4754]: I0218 19:40:58.231775 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:40:58 crc kubenswrapper[4754]: I0218 19:40:58.239682 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:40:59 crc kubenswrapper[4754]: I0218 19:40:59.556603 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7m5f2" event={"ID":"0498eea3-d1f4-43dd-82a3-4e98065a9fda","Type":"ContainerStarted","Data":"1bae5f121c7719d510033756925e87bd627c20c0a7ff8198303025c8b0b9a832"} Feb 18 19:40:59 crc kubenswrapper[4754]: I0218 19:40:59.589902 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-7m5f2" podStartSLOduration=3.589870072 podStartE2EDuration="3.589870072s" podCreationTimestamp="2026-02-18 19:40:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:40:59.577526408 +0000 UTC m=+1362.027939204" watchObservedRunningTime="2026-02-18 19:40:59.589870072 +0000 UTC m=+1362.040282878" Feb 18 19:41:01 crc kubenswrapper[4754]: I0218 19:41:01.578127 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e730fc15-7e56-45fa-a275-aca5ca181835","Type":"ContainerStarted","Data":"e8856cbcb3f83126a07b4c9c5639cce779747b8ac46cebdfc802b38ea2d94f86"} Feb 18 19:41:01 crc kubenswrapper[4754]: I0218 19:41:01.578274 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="e730fc15-7e56-45fa-a275-aca5ca181835" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://e8856cbcb3f83126a07b4c9c5639cce779747b8ac46cebdfc802b38ea2d94f86" gracePeriod=30 Feb 18 19:41:01 crc kubenswrapper[4754]: I0218 19:41:01.581269 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0e52c00-b81c-42f0-9526-b0e014d46f6d","Type":"ContainerStarted","Data":"1a8d0e667bb1c3b6fa199ee4a86780194174fb60018f70339e83c102b760393d"} Feb 18 19:41:01 crc kubenswrapper[4754]: I0218 19:41:01.581303 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0e52c00-b81c-42f0-9526-b0e014d46f6d","Type":"ContainerStarted","Data":"72890e0a81acb171933624700b7007eef4563a868e3596e12e8cbcc846f80399"} Feb 18 19:41:01 crc kubenswrapper[4754]: I0218 19:41:01.582178 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f0e52c00-b81c-42f0-9526-b0e014d46f6d" containerName="nova-metadata-log" containerID="cri-o://72890e0a81acb171933624700b7007eef4563a868e3596e12e8cbcc846f80399" gracePeriod=30 Feb 18 19:41:01 crc kubenswrapper[4754]: I0218 19:41:01.582357 4754 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-metadata-0" podUID="f0e52c00-b81c-42f0-9526-b0e014d46f6d" containerName="nova-metadata-metadata" containerID="cri-o://1a8d0e667bb1c3b6fa199ee4a86780194174fb60018f70339e83c102b760393d" gracePeriod=30 Feb 18 19:41:01 crc kubenswrapper[4754]: I0218 19:41:01.594935 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1","Type":"ContainerStarted","Data":"73fb9818dc79f95906465809820c804004adeb0d4aec7f1cfdd62c6fa4558ea8"} Feb 18 19:41:01 crc kubenswrapper[4754]: I0218 19:41:01.594985 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1","Type":"ContainerStarted","Data":"2a64d414e0b7addbe0db3945cead674a60c080bb005b7d0bc57cf9d72195a9ea"} Feb 18 19:41:01 crc kubenswrapper[4754]: I0218 19:41:01.597486 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6","Type":"ContainerStarted","Data":"bbb135ff28ed9909d091b1ddc7fd10dad91debe48842f57853f0f6bffd75d24a"} Feb 18 19:41:01 crc kubenswrapper[4754]: I0218 19:41:01.614863 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.251723693 podStartE2EDuration="7.614834486s" podCreationTimestamp="2026-02-18 19:40:54 +0000 UTC" firstStartedPulling="2026-02-18 19:40:55.833417958 +0000 UTC m=+1358.283830754" lastFinishedPulling="2026-02-18 19:41:00.196528751 +0000 UTC m=+1362.646941547" observedRunningTime="2026-02-18 19:41:01.603472542 +0000 UTC m=+1364.053885358" watchObservedRunningTime="2026-02-18 19:41:01.614834486 +0000 UTC m=+1364.065247282" Feb 18 19:41:01 crc kubenswrapper[4754]: I0218 19:41:01.634403 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.13794121 podStartE2EDuration="7.634370821s" 
podCreationTimestamp="2026-02-18 19:40:54 +0000 UTC" firstStartedPulling="2026-02-18 19:40:55.702876352 +0000 UTC m=+1358.153289148" lastFinishedPulling="2026-02-18 19:41:00.199305953 +0000 UTC m=+1362.649718759" observedRunningTime="2026-02-18 19:41:01.623239224 +0000 UTC m=+1364.073652030" watchObservedRunningTime="2026-02-18 19:41:01.634370821 +0000 UTC m=+1364.084783617" Feb 18 19:41:01 crc kubenswrapper[4754]: I0218 19:41:01.655915 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.896936784 podStartE2EDuration="7.655880646s" podCreationTimestamp="2026-02-18 19:40:54 +0000 UTC" firstStartedPulling="2026-02-18 19:40:55.439582647 +0000 UTC m=+1357.889995443" lastFinishedPulling="2026-02-18 19:41:00.198526509 +0000 UTC m=+1362.648939305" observedRunningTime="2026-02-18 19:41:01.642170382 +0000 UTC m=+1364.092583188" watchObservedRunningTime="2026-02-18 19:41:01.655880646 +0000 UTC m=+1364.106293442" Feb 18 19:41:01 crc kubenswrapper[4754]: I0218 19:41:01.668533 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.299506811 podStartE2EDuration="7.668504867s" podCreationTimestamp="2026-02-18 19:40:54 +0000 UTC" firstStartedPulling="2026-02-18 19:40:55.833376427 +0000 UTC m=+1358.283789223" lastFinishedPulling="2026-02-18 19:41:00.202374483 +0000 UTC m=+1362.652787279" observedRunningTime="2026-02-18 19:41:01.662772418 +0000 UTC m=+1364.113185214" watchObservedRunningTime="2026-02-18 19:41:01.668504867 +0000 UTC m=+1364.118917663" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.267367 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.410856 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e52c00-b81c-42f0-9526-b0e014d46f6d-config-data\") pod \"f0e52c00-b81c-42f0-9526-b0e014d46f6d\" (UID: \"f0e52c00-b81c-42f0-9526-b0e014d46f6d\") " Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.410921 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0e52c00-b81c-42f0-9526-b0e014d46f6d-logs\") pod \"f0e52c00-b81c-42f0-9526-b0e014d46f6d\" (UID: \"f0e52c00-b81c-42f0-9526-b0e014d46f6d\") " Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.411028 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvp27\" (UniqueName: \"kubernetes.io/projected/f0e52c00-b81c-42f0-9526-b0e014d46f6d-kube-api-access-zvp27\") pod \"f0e52c00-b81c-42f0-9526-b0e014d46f6d\" (UID: \"f0e52c00-b81c-42f0-9526-b0e014d46f6d\") " Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.411091 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e52c00-b81c-42f0-9526-b0e014d46f6d-combined-ca-bundle\") pod \"f0e52c00-b81c-42f0-9526-b0e014d46f6d\" (UID: \"f0e52c00-b81c-42f0-9526-b0e014d46f6d\") " Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.411724 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0e52c00-b81c-42f0-9526-b0e014d46f6d-logs" (OuterVolumeSpecName: "logs") pod "f0e52c00-b81c-42f0-9526-b0e014d46f6d" (UID: "f0e52c00-b81c-42f0-9526-b0e014d46f6d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.421296 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e52c00-b81c-42f0-9526-b0e014d46f6d-kube-api-access-zvp27" (OuterVolumeSpecName: "kube-api-access-zvp27") pod "f0e52c00-b81c-42f0-9526-b0e014d46f6d" (UID: "f0e52c00-b81c-42f0-9526-b0e014d46f6d"). InnerVolumeSpecName "kube-api-access-zvp27". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.453122 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e52c00-b81c-42f0-9526-b0e014d46f6d-config-data" (OuterVolumeSpecName: "config-data") pod "f0e52c00-b81c-42f0-9526-b0e014d46f6d" (UID: "f0e52c00-b81c-42f0-9526-b0e014d46f6d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.453211 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e52c00-b81c-42f0-9526-b0e014d46f6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0e52c00-b81c-42f0-9526-b0e014d46f6d" (UID: "f0e52c00-b81c-42f0-9526-b0e014d46f6d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.514031 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e52c00-b81c-42f0-9526-b0e014d46f6d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.514338 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0e52c00-b81c-42f0-9526-b0e014d46f6d-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.514352 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvp27\" (UniqueName: \"kubernetes.io/projected/f0e52c00-b81c-42f0-9526-b0e014d46f6d-kube-api-access-zvp27\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.514365 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e52c00-b81c-42f0-9526-b0e014d46f6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.623107 4754 generic.go:334] "Generic (PLEG): container finished" podID="f0e52c00-b81c-42f0-9526-b0e014d46f6d" containerID="1a8d0e667bb1c3b6fa199ee4a86780194174fb60018f70339e83c102b760393d" exitCode=0 Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.623178 4754 generic.go:334] "Generic (PLEG): container finished" podID="f0e52c00-b81c-42f0-9526-b0e014d46f6d" containerID="72890e0a81acb171933624700b7007eef4563a868e3596e12e8cbcc846f80399" exitCode=143 Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.625117 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.632895 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0e52c00-b81c-42f0-9526-b0e014d46f6d","Type":"ContainerDied","Data":"1a8d0e667bb1c3b6fa199ee4a86780194174fb60018f70339e83c102b760393d"} Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.632976 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0e52c00-b81c-42f0-9526-b0e014d46f6d","Type":"ContainerDied","Data":"72890e0a81acb171933624700b7007eef4563a868e3596e12e8cbcc846f80399"} Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.632996 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0e52c00-b81c-42f0-9526-b0e014d46f6d","Type":"ContainerDied","Data":"d61f23d6c759f20b4f9c86037345311661effe135e8604744a0e0a623b672437"} Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.633268 4754 scope.go:117] "RemoveContainer" containerID="1a8d0e667bb1c3b6fa199ee4a86780194174fb60018f70339e83c102b760393d" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.668860 4754 scope.go:117] "RemoveContainer" containerID="72890e0a81acb171933624700b7007eef4563a868e3596e12e8cbcc846f80399" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.674758 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.702339 4754 scope.go:117] "RemoveContainer" containerID="1a8d0e667bb1c3b6fa199ee4a86780194174fb60018f70339e83c102b760393d" Feb 18 19:41:02 crc kubenswrapper[4754]: E0218 19:41:02.703243 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a8d0e667bb1c3b6fa199ee4a86780194174fb60018f70339e83c102b760393d\": container with ID starting with 1a8d0e667bb1c3b6fa199ee4a86780194174fb60018f70339e83c102b760393d 
not found: ID does not exist" containerID="1a8d0e667bb1c3b6fa199ee4a86780194174fb60018f70339e83c102b760393d" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.703298 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a8d0e667bb1c3b6fa199ee4a86780194174fb60018f70339e83c102b760393d"} err="failed to get container status \"1a8d0e667bb1c3b6fa199ee4a86780194174fb60018f70339e83c102b760393d\": rpc error: code = NotFound desc = could not find container \"1a8d0e667bb1c3b6fa199ee4a86780194174fb60018f70339e83c102b760393d\": container with ID starting with 1a8d0e667bb1c3b6fa199ee4a86780194174fb60018f70339e83c102b760393d not found: ID does not exist" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.703332 4754 scope.go:117] "RemoveContainer" containerID="72890e0a81acb171933624700b7007eef4563a868e3596e12e8cbcc846f80399" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.703405 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:41:02 crc kubenswrapper[4754]: E0218 19:41:02.703907 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72890e0a81acb171933624700b7007eef4563a868e3596e12e8cbcc846f80399\": container with ID starting with 72890e0a81acb171933624700b7007eef4563a868e3596e12e8cbcc846f80399 not found: ID does not exist" containerID="72890e0a81acb171933624700b7007eef4563a868e3596e12e8cbcc846f80399" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.703961 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72890e0a81acb171933624700b7007eef4563a868e3596e12e8cbcc846f80399"} err="failed to get container status \"72890e0a81acb171933624700b7007eef4563a868e3596e12e8cbcc846f80399\": rpc error: code = NotFound desc = could not find container \"72890e0a81acb171933624700b7007eef4563a868e3596e12e8cbcc846f80399\": container with ID starting with 
72890e0a81acb171933624700b7007eef4563a868e3596e12e8cbcc846f80399 not found: ID does not exist" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.703997 4754 scope.go:117] "RemoveContainer" containerID="1a8d0e667bb1c3b6fa199ee4a86780194174fb60018f70339e83c102b760393d" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.704359 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a8d0e667bb1c3b6fa199ee4a86780194174fb60018f70339e83c102b760393d"} err="failed to get container status \"1a8d0e667bb1c3b6fa199ee4a86780194174fb60018f70339e83c102b760393d\": rpc error: code = NotFound desc = could not find container \"1a8d0e667bb1c3b6fa199ee4a86780194174fb60018f70339e83c102b760393d\": container with ID starting with 1a8d0e667bb1c3b6fa199ee4a86780194174fb60018f70339e83c102b760393d not found: ID does not exist" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.704386 4754 scope.go:117] "RemoveContainer" containerID="72890e0a81acb171933624700b7007eef4563a868e3596e12e8cbcc846f80399" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.705376 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72890e0a81acb171933624700b7007eef4563a868e3596e12e8cbcc846f80399"} err="failed to get container status \"72890e0a81acb171933624700b7007eef4563a868e3596e12e8cbcc846f80399\": rpc error: code = NotFound desc = could not find container \"72890e0a81acb171933624700b7007eef4563a868e3596e12e8cbcc846f80399\": container with ID starting with 72890e0a81acb171933624700b7007eef4563a868e3596e12e8cbcc846f80399 not found: ID does not exist" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.716927 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:41:02 crc kubenswrapper[4754]: E0218 19:41:02.717498 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e52c00-b81c-42f0-9526-b0e014d46f6d" containerName="nova-metadata-log" Feb 
18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.717528 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e52c00-b81c-42f0-9526-b0e014d46f6d" containerName="nova-metadata-log" Feb 18 19:41:02 crc kubenswrapper[4754]: E0218 19:41:02.717563 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e52c00-b81c-42f0-9526-b0e014d46f6d" containerName="nova-metadata-metadata" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.717570 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e52c00-b81c-42f0-9526-b0e014d46f6d" containerName="nova-metadata-metadata" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.717782 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e52c00-b81c-42f0-9526-b0e014d46f6d" containerName="nova-metadata-log" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.717807 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e52c00-b81c-42f0-9526-b0e014d46f6d" containerName="nova-metadata-metadata" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.719610 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.723826 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.724560 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.726036 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.825944 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09236174-0a6c-4663-b15b-538529da5663-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"09236174-0a6c-4663-b15b-538529da5663\") " pod="openstack/nova-metadata-0" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.826013 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09236174-0a6c-4663-b15b-538529da5663-config-data\") pod \"nova-metadata-0\" (UID: \"09236174-0a6c-4663-b15b-538529da5663\") " pod="openstack/nova-metadata-0" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.826037 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09236174-0a6c-4663-b15b-538529da5663-logs\") pod \"nova-metadata-0\" (UID: \"09236174-0a6c-4663-b15b-538529da5663\") " pod="openstack/nova-metadata-0" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.826157 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlnff\" (UniqueName: \"kubernetes.io/projected/09236174-0a6c-4663-b15b-538529da5663-kube-api-access-nlnff\") pod \"nova-metadata-0\" (UID: 
\"09236174-0a6c-4663-b15b-538529da5663\") " pod="openstack/nova-metadata-0" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.826239 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/09236174-0a6c-4663-b15b-538529da5663-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"09236174-0a6c-4663-b15b-538529da5663\") " pod="openstack/nova-metadata-0" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.928018 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/09236174-0a6c-4663-b15b-538529da5663-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"09236174-0a6c-4663-b15b-538529da5663\") " pod="openstack/nova-metadata-0" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.928468 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09236174-0a6c-4663-b15b-538529da5663-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"09236174-0a6c-4663-b15b-538529da5663\") " pod="openstack/nova-metadata-0" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.928634 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09236174-0a6c-4663-b15b-538529da5663-config-data\") pod \"nova-metadata-0\" (UID: \"09236174-0a6c-4663-b15b-538529da5663\") " pod="openstack/nova-metadata-0" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.928772 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09236174-0a6c-4663-b15b-538529da5663-logs\") pod \"nova-metadata-0\" (UID: \"09236174-0a6c-4663-b15b-538529da5663\") " pod="openstack/nova-metadata-0" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.929021 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlnff\" (UniqueName: \"kubernetes.io/projected/09236174-0a6c-4663-b15b-538529da5663-kube-api-access-nlnff\") pod \"nova-metadata-0\" (UID: \"09236174-0a6c-4663-b15b-538529da5663\") " pod="openstack/nova-metadata-0" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.929338 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09236174-0a6c-4663-b15b-538529da5663-logs\") pod \"nova-metadata-0\" (UID: \"09236174-0a6c-4663-b15b-538529da5663\") " pod="openstack/nova-metadata-0" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.933735 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/09236174-0a6c-4663-b15b-538529da5663-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"09236174-0a6c-4663-b15b-538529da5663\") " pod="openstack/nova-metadata-0" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.937706 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09236174-0a6c-4663-b15b-538529da5663-config-data\") pod \"nova-metadata-0\" (UID: \"09236174-0a6c-4663-b15b-538529da5663\") " pod="openstack/nova-metadata-0" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.940087 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09236174-0a6c-4663-b15b-538529da5663-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"09236174-0a6c-4663-b15b-538529da5663\") " pod="openstack/nova-metadata-0" Feb 18 19:41:02 crc kubenswrapper[4754]: I0218 19:41:02.950978 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlnff\" (UniqueName: \"kubernetes.io/projected/09236174-0a6c-4663-b15b-538529da5663-kube-api-access-nlnff\") pod 
\"nova-metadata-0\" (UID: \"09236174-0a6c-4663-b15b-538529da5663\") " pod="openstack/nova-metadata-0" Feb 18 19:41:03 crc kubenswrapper[4754]: I0218 19:41:03.051046 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:41:03 crc kubenswrapper[4754]: I0218 19:41:03.559897 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:41:03 crc kubenswrapper[4754]: I0218 19:41:03.639032 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"09236174-0a6c-4663-b15b-538529da5663","Type":"ContainerStarted","Data":"9e43074e1d4ee96c1ad20d2226d2d2b27f0f89389d6f2d4fbefa52aaaf9a0983"} Feb 18 19:41:04 crc kubenswrapper[4754]: I0218 19:41:04.226999 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0e52c00-b81c-42f0-9526-b0e014d46f6d" path="/var/lib/kubelet/pods/f0e52c00-b81c-42f0-9526-b0e014d46f6d/volumes" Feb 18 19:41:04 crc kubenswrapper[4754]: I0218 19:41:04.493644 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:41:04 crc kubenswrapper[4754]: I0218 19:41:04.493714 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:41:04 crc kubenswrapper[4754]: I0218 19:41:04.657920 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"09236174-0a6c-4663-b15b-538529da5663","Type":"ContainerStarted","Data":"d4ebb3227cd233e6533f6e8e2dd27570d0b7c6e993a229fc701cc8afb225a123"} Feb 18 19:41:04 crc kubenswrapper[4754]: I0218 19:41:04.658495 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"09236174-0a6c-4663-b15b-538529da5663","Type":"ContainerStarted","Data":"0e32835b6d032ac67246d0c6f7651baaee755568372aabb4437b99288213faef"} Feb 18 19:41:04 crc kubenswrapper[4754]: I0218 19:41:04.690282 4754 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.690245481 podStartE2EDuration="2.690245481s" podCreationTimestamp="2026-02-18 19:41:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:04.677241789 +0000 UTC m=+1367.127654585" watchObservedRunningTime="2026-02-18 19:41:04.690245481 +0000 UTC m=+1367.140658287" Feb 18 19:41:04 crc kubenswrapper[4754]: I0218 19:41:04.835668 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 19:41:04 crc kubenswrapper[4754]: I0218 19:41:04.835715 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 19:41:04 crc kubenswrapper[4754]: I0218 19:41:04.873608 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 19:41:04 crc kubenswrapper[4754]: I0218 19:41:04.938824 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:05 crc kubenswrapper[4754]: I0218 19:41:05.173276 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:41:05 crc kubenswrapper[4754]: I0218 19:41:05.277288 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-wbqqj"] Feb 18 19:41:05 crc kubenswrapper[4754]: I0218 19:41:05.277705 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" podUID="64903172-1b19-4bf2-b44c-1635bf00ca14" containerName="dnsmasq-dns" containerID="cri-o://ab9048b237e67f20e5f121078b52c627d3989cccf4a25b57389019a14a236035" gracePeriod=10 Feb 18 19:41:05 crc kubenswrapper[4754]: I0218 19:41:05.472127 4754 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" podUID="64903172-1b19-4bf2-b44c-1635bf00ca14" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.182:5353: connect: connection refused" Feb 18 19:41:05 crc kubenswrapper[4754]: I0218 19:41:05.586285 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="be3d5c8c-4d4e-4049-bc2a-310d88b43bb1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.208:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:41:05 crc kubenswrapper[4754]: I0218 19:41:05.586307 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="be3d5c8c-4d4e-4049-bc2a-310d88b43bb1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.208:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:41:05 crc kubenswrapper[4754]: I0218 19:41:05.680215 4754 generic.go:334] "Generic (PLEG): container finished" podID="64903172-1b19-4bf2-b44c-1635bf00ca14" containerID="ab9048b237e67f20e5f121078b52c627d3989cccf4a25b57389019a14a236035" exitCode=0 Feb 18 19:41:05 crc kubenswrapper[4754]: I0218 19:41:05.680293 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" event={"ID":"64903172-1b19-4bf2-b44c-1635bf00ca14","Type":"ContainerDied","Data":"ab9048b237e67f20e5f121078b52c627d3989cccf4a25b57389019a14a236035"} Feb 18 19:41:05 crc kubenswrapper[4754]: I0218 19:41:05.684072 4754 generic.go:334] "Generic (PLEG): container finished" podID="ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1" containerID="c186b5c3527227838597ba94acfc66a3d49a4c83f60237e792c7cc69688329f0" exitCode=0 Feb 18 19:41:05 crc kubenswrapper[4754]: I0218 19:41:05.684663 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8n2ll" 
event={"ID":"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1","Type":"ContainerDied","Data":"c186b5c3527227838597ba94acfc66a3d49a4c83f60237e792c7cc69688329f0"} Feb 18 19:41:05 crc kubenswrapper[4754]: I0218 19:41:05.735448 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 19:41:05 crc kubenswrapper[4754]: I0218 19:41:05.881880 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.011409 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l7t2\" (UniqueName: \"kubernetes.io/projected/64903172-1b19-4bf2-b44c-1635bf00ca14-kube-api-access-9l7t2\") pod \"64903172-1b19-4bf2-b44c-1635bf00ca14\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.012516 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-dns-svc\") pod \"64903172-1b19-4bf2-b44c-1635bf00ca14\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.012584 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-ovsdbserver-sb\") pod \"64903172-1b19-4bf2-b44c-1635bf00ca14\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.012653 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-dns-swift-storage-0\") pod \"64903172-1b19-4bf2-b44c-1635bf00ca14\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") " Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.012726 
4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-config\") pod \"64903172-1b19-4bf2-b44c-1635bf00ca14\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") "
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.012822 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-ovsdbserver-nb\") pod \"64903172-1b19-4bf2-b44c-1635bf00ca14\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") "
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.051702 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64903172-1b19-4bf2-b44c-1635bf00ca14-kube-api-access-9l7t2" (OuterVolumeSpecName: "kube-api-access-9l7t2") pod "64903172-1b19-4bf2-b44c-1635bf00ca14" (UID: "64903172-1b19-4bf2-b44c-1635bf00ca14"). InnerVolumeSpecName "kube-api-access-9l7t2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.090595 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-config" (OuterVolumeSpecName: "config") pod "64903172-1b19-4bf2-b44c-1635bf00ca14" (UID: "64903172-1b19-4bf2-b44c-1635bf00ca14"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.100161 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64903172-1b19-4bf2-b44c-1635bf00ca14" (UID: "64903172-1b19-4bf2-b44c-1635bf00ca14"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.101587 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "64903172-1b19-4bf2-b44c-1635bf00ca14" (UID: "64903172-1b19-4bf2-b44c-1635bf00ca14"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.109513 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "64903172-1b19-4bf2-b44c-1635bf00ca14" (UID: "64903172-1b19-4bf2-b44c-1635bf00ca14"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.114212 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "64903172-1b19-4bf2-b44c-1635bf00ca14" (UID: "64903172-1b19-4bf2-b44c-1635bf00ca14"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.114458 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-dns-swift-storage-0\") pod \"64903172-1b19-4bf2-b44c-1635bf00ca14\" (UID: \"64903172-1b19-4bf2-b44c-1635bf00ca14\") "
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.115302 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.115319 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.115332 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l7t2\" (UniqueName: \"kubernetes.io/projected/64903172-1b19-4bf2-b44c-1635bf00ca14-kube-api-access-9l7t2\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.115340 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.115348 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:06 crc kubenswrapper[4754]: W0218 19:41:06.115430 4754 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/64903172-1b19-4bf2-b44c-1635bf00ca14/volumes/kubernetes.io~configmap/dns-swift-storage-0
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.115442 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "64903172-1b19-4bf2-b44c-1635bf00ca14" (UID: "64903172-1b19-4bf2-b44c-1635bf00ca14"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.217561 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64903172-1b19-4bf2-b44c-1635bf00ca14-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.697706 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-wbqqj" event={"ID":"64903172-1b19-4bf2-b44c-1635bf00ca14","Type":"ContainerDied","Data":"745da8de126ae7f90459ef5ba06f450d43f91078b1451391b68dacb5178b9792"}
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.697726 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-wbqqj"
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.697778 4754 scope.go:117] "RemoveContainer" containerID="ab9048b237e67f20e5f121078b52c627d3989cccf4a25b57389019a14a236035"
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.701229 4754 generic.go:334] "Generic (PLEG): container finished" podID="0498eea3-d1f4-43dd-82a3-4e98065a9fda" containerID="1bae5f121c7719d510033756925e87bd627c20c0a7ff8198303025c8b0b9a832" exitCode=0
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.701603 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7m5f2" event={"ID":"0498eea3-d1f4-43dd-82a3-4e98065a9fda","Type":"ContainerDied","Data":"1bae5f121c7719d510033756925e87bd627c20c0a7ff8198303025c8b0b9a832"}
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.754337 4754 scope.go:117] "RemoveContainer" containerID="f43642ba27f947fc7a2cbf49477a3a559e570c93cad57c29052cc49d92f95766"
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.763367 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-wbqqj"]
Feb 18 19:41:06 crc kubenswrapper[4754]: I0218 19:41:06.775803 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-wbqqj"]
Feb 18 19:41:07 crc kubenswrapper[4754]: I0218 19:41:07.125929 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8n2ll"
Feb 18 19:41:07 crc kubenswrapper[4754]: I0218 19:41:07.243532 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbgn5\" (UniqueName: \"kubernetes.io/projected/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-kube-api-access-nbgn5\") pod \"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\" (UID: \"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\") "
Feb 18 19:41:07 crc kubenswrapper[4754]: I0218 19:41:07.243827 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-config-data\") pod \"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\" (UID: \"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\") "
Feb 18 19:41:07 crc kubenswrapper[4754]: I0218 19:41:07.243951 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-scripts\") pod \"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\" (UID: \"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\") "
Feb 18 19:41:07 crc kubenswrapper[4754]: I0218 19:41:07.244000 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-combined-ca-bundle\") pod \"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\" (UID: \"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1\") "
Feb 18 19:41:07 crc kubenswrapper[4754]: I0218 19:41:07.251586 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-kube-api-access-nbgn5" (OuterVolumeSpecName: "kube-api-access-nbgn5") pod "ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1" (UID: "ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1"). InnerVolumeSpecName "kube-api-access-nbgn5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:41:07 crc kubenswrapper[4754]: I0218 19:41:07.254629 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-scripts" (OuterVolumeSpecName: "scripts") pod "ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1" (UID: "ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:41:07 crc kubenswrapper[4754]: I0218 19:41:07.277346 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1" (UID: "ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:41:07 crc kubenswrapper[4754]: I0218 19:41:07.284198 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-config-data" (OuterVolumeSpecName: "config-data") pod "ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1" (UID: "ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:41:07 crc kubenswrapper[4754]: I0218 19:41:07.349095 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:07 crc kubenswrapper[4754]: I0218 19:41:07.349120 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:07 crc kubenswrapper[4754]: I0218 19:41:07.349131 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbgn5\" (UniqueName: \"kubernetes.io/projected/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-kube-api-access-nbgn5\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:07 crc kubenswrapper[4754]: I0218 19:41:07.349381 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:07 crc kubenswrapper[4754]: I0218 19:41:07.714327 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8n2ll" event={"ID":"ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1","Type":"ContainerDied","Data":"9aef00d2e5c616e92b64d0827cd7f957e710d6579c87917d04aa7b778db37494"}
Feb 18 19:41:07 crc kubenswrapper[4754]: I0218 19:41:07.715382 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9aef00d2e5c616e92b64d0827cd7f957e710d6579c87917d04aa7b778db37494"
Feb 18 19:41:07 crc kubenswrapper[4754]: I0218 19:41:07.714437 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8n2ll"
Feb 18 19:41:07 crc kubenswrapper[4754]: I0218 19:41:07.958487 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:41:07 crc kubenswrapper[4754]: I0218 19:41:07.959611 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="be3d5c8c-4d4e-4049-bc2a-310d88b43bb1" containerName="nova-api-api" containerID="cri-o://73fb9818dc79f95906465809820c804004adeb0d4aec7f1cfdd62c6fa4558ea8" gracePeriod=30
Feb 18 19:41:07 crc kubenswrapper[4754]: I0218 19:41:07.960404 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="be3d5c8c-4d4e-4049-bc2a-310d88b43bb1" containerName="nova-api-log" containerID="cri-o://2a64d414e0b7addbe0db3945cead674a60c080bb005b7d0bc57cf9d72195a9ea" gracePeriod=30
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.017297 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.017555 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6" containerName="nova-scheduler-scheduler" containerID="cri-o://bbb135ff28ed9909d091b1ddc7fd10dad91debe48842f57853f0f6bffd75d24a" gracePeriod=30
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.034761 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.035050 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="09236174-0a6c-4663-b15b-538529da5663" containerName="nova-metadata-log" containerID="cri-o://0e32835b6d032ac67246d0c6f7651baaee755568372aabb4437b99288213faef" gracePeriod=30
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.035187 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="09236174-0a6c-4663-b15b-538529da5663" containerName="nova-metadata-metadata" containerID="cri-o://d4ebb3227cd233e6533f6e8e2dd27570d0b7c6e993a229fc701cc8afb225a123" gracePeriod=30
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.057368 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.057452 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.223963 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64903172-1b19-4bf2-b44c-1635bf00ca14" path="/var/lib/kubelet/pods/64903172-1b19-4bf2-b44c-1635bf00ca14/volumes"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.323915 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7m5f2"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.370631 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cfcg\" (UniqueName: \"kubernetes.io/projected/0498eea3-d1f4-43dd-82a3-4e98065a9fda-kube-api-access-6cfcg\") pod \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\" (UID: \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\") "
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.370830 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0498eea3-d1f4-43dd-82a3-4e98065a9fda-config-data\") pod \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\" (UID: \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\") "
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.370910 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0498eea3-d1f4-43dd-82a3-4e98065a9fda-scripts\") pod \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\" (UID: \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\") "
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.370967 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0498eea3-d1f4-43dd-82a3-4e98065a9fda-combined-ca-bundle\") pod \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\" (UID: \"0498eea3-d1f4-43dd-82a3-4e98065a9fda\") "
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.375682 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0498eea3-d1f4-43dd-82a3-4e98065a9fda-scripts" (OuterVolumeSpecName: "scripts") pod "0498eea3-d1f4-43dd-82a3-4e98065a9fda" (UID: "0498eea3-d1f4-43dd-82a3-4e98065a9fda"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.375758 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0498eea3-d1f4-43dd-82a3-4e98065a9fda-kube-api-access-6cfcg" (OuterVolumeSpecName: "kube-api-access-6cfcg") pod "0498eea3-d1f4-43dd-82a3-4e98065a9fda" (UID: "0498eea3-d1f4-43dd-82a3-4e98065a9fda"). InnerVolumeSpecName "kube-api-access-6cfcg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.403680 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0498eea3-d1f4-43dd-82a3-4e98065a9fda-config-data" (OuterVolumeSpecName: "config-data") pod "0498eea3-d1f4-43dd-82a3-4e98065a9fda" (UID: "0498eea3-d1f4-43dd-82a3-4e98065a9fda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.407512 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0498eea3-d1f4-43dd-82a3-4e98065a9fda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0498eea3-d1f4-43dd-82a3-4e98065a9fda" (UID: "0498eea3-d1f4-43dd-82a3-4e98065a9fda"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.474713 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0498eea3-d1f4-43dd-82a3-4e98065a9fda-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.474779 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0498eea3-d1f4-43dd-82a3-4e98065a9fda-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.474789 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0498eea3-d1f4-43dd-82a3-4e98065a9fda-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.474800 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cfcg\" (UniqueName: \"kubernetes.io/projected/0498eea3-d1f4-43dd-82a3-4e98065a9fda-kube-api-access-6cfcg\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.577644 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.678492 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/09236174-0a6c-4663-b15b-538529da5663-nova-metadata-tls-certs\") pod \"09236174-0a6c-4663-b15b-538529da5663\" (UID: \"09236174-0a6c-4663-b15b-538529da5663\") "
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.678626 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09236174-0a6c-4663-b15b-538529da5663-logs\") pod \"09236174-0a6c-4663-b15b-538529da5663\" (UID: \"09236174-0a6c-4663-b15b-538529da5663\") "
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.678650 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09236174-0a6c-4663-b15b-538529da5663-combined-ca-bundle\") pod \"09236174-0a6c-4663-b15b-538529da5663\" (UID: \"09236174-0a6c-4663-b15b-538529da5663\") "
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.678942 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09236174-0a6c-4663-b15b-538529da5663-logs" (OuterVolumeSpecName: "logs") pod "09236174-0a6c-4663-b15b-538529da5663" (UID: "09236174-0a6c-4663-b15b-538529da5663"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.679013 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09236174-0a6c-4663-b15b-538529da5663-config-data\") pod \"09236174-0a6c-4663-b15b-538529da5663\" (UID: \"09236174-0a6c-4663-b15b-538529da5663\") "
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.679329 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlnff\" (UniqueName: \"kubernetes.io/projected/09236174-0a6c-4663-b15b-538529da5663-kube-api-access-nlnff\") pod \"09236174-0a6c-4663-b15b-538529da5663\" (UID: \"09236174-0a6c-4663-b15b-538529da5663\") "
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.679819 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09236174-0a6c-4663-b15b-538529da5663-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.683603 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09236174-0a6c-4663-b15b-538529da5663-kube-api-access-nlnff" (OuterVolumeSpecName: "kube-api-access-nlnff") pod "09236174-0a6c-4663-b15b-538529da5663" (UID: "09236174-0a6c-4663-b15b-538529da5663"). InnerVolumeSpecName "kube-api-access-nlnff". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.710236 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09236174-0a6c-4663-b15b-538529da5663-config-data" (OuterVolumeSpecName: "config-data") pod "09236174-0a6c-4663-b15b-538529da5663" (UID: "09236174-0a6c-4663-b15b-538529da5663"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.710699 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09236174-0a6c-4663-b15b-538529da5663-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09236174-0a6c-4663-b15b-538529da5663" (UID: "09236174-0a6c-4663-b15b-538529da5663"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.754758 4754 generic.go:334] "Generic (PLEG): container finished" podID="09236174-0a6c-4663-b15b-538529da5663" containerID="d4ebb3227cd233e6533f6e8e2dd27570d0b7c6e993a229fc701cc8afb225a123" exitCode=0
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.754804 4754 generic.go:334] "Generic (PLEG): container finished" podID="09236174-0a6c-4663-b15b-538529da5663" containerID="0e32835b6d032ac67246d0c6f7651baaee755568372aabb4437b99288213faef" exitCode=143
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.754874 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"09236174-0a6c-4663-b15b-538529da5663","Type":"ContainerDied","Data":"d4ebb3227cd233e6533f6e8e2dd27570d0b7c6e993a229fc701cc8afb225a123"}
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.754915 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"09236174-0a6c-4663-b15b-538529da5663","Type":"ContainerDied","Data":"0e32835b6d032ac67246d0c6f7651baaee755568372aabb4437b99288213faef"}
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.754930 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"09236174-0a6c-4663-b15b-538529da5663","Type":"ContainerDied","Data":"9e43074e1d4ee96c1ad20d2226d2d2b27f0f89389d6f2d4fbefa52aaaf9a0983"}
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.754949 4754 scope.go:117] "RemoveContainer" containerID="d4ebb3227cd233e6533f6e8e2dd27570d0b7c6e993a229fc701cc8afb225a123"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.755184 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.773783 4754 generic.go:334] "Generic (PLEG): container finished" podID="be3d5c8c-4d4e-4049-bc2a-310d88b43bb1" containerID="2a64d414e0b7addbe0db3945cead674a60c080bb005b7d0bc57cf9d72195a9ea" exitCode=143
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.773899 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1","Type":"ContainerDied","Data":"2a64d414e0b7addbe0db3945cead674a60c080bb005b7d0bc57cf9d72195a9ea"}
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.778388 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7m5f2" event={"ID":"0498eea3-d1f4-43dd-82a3-4e98065a9fda","Type":"ContainerDied","Data":"ef693e98fb01e2d220bf89aed8f4771c5848331c69a0a6cc21b06fa2e8678a6c"}
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.778535 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef693e98fb01e2d220bf89aed8f4771c5848331c69a0a6cc21b06fa2e8678a6c"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.778875 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7m5f2"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.787812 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09236174-0a6c-4663-b15b-538529da5663-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "09236174-0a6c-4663-b15b-538529da5663" (UID: "09236174-0a6c-4663-b15b-538529da5663"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.790645 4754 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/09236174-0a6c-4663-b15b-538529da5663-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.790871 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09236174-0a6c-4663-b15b-538529da5663-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.790961 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09236174-0a6c-4663-b15b-538529da5663-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.791068 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlnff\" (UniqueName: \"kubernetes.io/projected/09236174-0a6c-4663-b15b-538529da5663-kube-api-access-nlnff\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.818393 4754 scope.go:117] "RemoveContainer" containerID="0e32835b6d032ac67246d0c6f7651baaee755568372aabb4437b99288213faef"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.826475 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 18 19:41:08 crc kubenswrapper[4754]: E0218 19:41:08.826918 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64903172-1b19-4bf2-b44c-1635bf00ca14" containerName="dnsmasq-dns"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.826936 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="64903172-1b19-4bf2-b44c-1635bf00ca14" containerName="dnsmasq-dns"
Feb 18 19:41:08 crc kubenswrapper[4754]: E0218 19:41:08.826962 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09236174-0a6c-4663-b15b-538529da5663" containerName="nova-metadata-metadata"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.826968 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="09236174-0a6c-4663-b15b-538529da5663" containerName="nova-metadata-metadata"
Feb 18 19:41:08 crc kubenswrapper[4754]: E0218 19:41:08.826983 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09236174-0a6c-4663-b15b-538529da5663" containerName="nova-metadata-log"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.826989 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="09236174-0a6c-4663-b15b-538529da5663" containerName="nova-metadata-log"
Feb 18 19:41:08 crc kubenswrapper[4754]: E0218 19:41:08.826998 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64903172-1b19-4bf2-b44c-1635bf00ca14" containerName="init"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.827004 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="64903172-1b19-4bf2-b44c-1635bf00ca14" containerName="init"
Feb 18 19:41:08 crc kubenswrapper[4754]: E0218 19:41:08.827014 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1" containerName="nova-manage"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.827022 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1" containerName="nova-manage"
Feb 18 19:41:08 crc kubenswrapper[4754]: E0218 19:41:08.827039 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0498eea3-d1f4-43dd-82a3-4e98065a9fda" containerName="nova-cell1-conductor-db-sync"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.827047 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="0498eea3-d1f4-43dd-82a3-4e98065a9fda" containerName="nova-cell1-conductor-db-sync"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.827278 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="09236174-0a6c-4663-b15b-538529da5663" containerName="nova-metadata-log"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.827289 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="0498eea3-d1f4-43dd-82a3-4e98065a9fda" containerName="nova-cell1-conductor-db-sync"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.827301 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1" containerName="nova-manage"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.827311 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="64903172-1b19-4bf2-b44c-1635bf00ca14" containerName="dnsmasq-dns"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.827324 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="09236174-0a6c-4663-b15b-538529da5663" containerName="nova-metadata-metadata"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.829359 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.847127 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.847346 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.866824 4754 scope.go:117] "RemoveContainer" containerID="d4ebb3227cd233e6533f6e8e2dd27570d0b7c6e993a229fc701cc8afb225a123"
Feb 18 19:41:08 crc kubenswrapper[4754]: E0218 19:41:08.867940 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4ebb3227cd233e6533f6e8e2dd27570d0b7c6e993a229fc701cc8afb225a123\": container with ID starting with d4ebb3227cd233e6533f6e8e2dd27570d0b7c6e993a229fc701cc8afb225a123 not found: ID does not exist" containerID="d4ebb3227cd233e6533f6e8e2dd27570d0b7c6e993a229fc701cc8afb225a123"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.868001 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4ebb3227cd233e6533f6e8e2dd27570d0b7c6e993a229fc701cc8afb225a123"} err="failed to get container status \"d4ebb3227cd233e6533f6e8e2dd27570d0b7c6e993a229fc701cc8afb225a123\": rpc error: code = NotFound desc = could not find container \"d4ebb3227cd233e6533f6e8e2dd27570d0b7c6e993a229fc701cc8afb225a123\": container with ID starting with d4ebb3227cd233e6533f6e8e2dd27570d0b7c6e993a229fc701cc8afb225a123 not found: ID does not exist"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.868040 4754 scope.go:117] "RemoveContainer" containerID="0e32835b6d032ac67246d0c6f7651baaee755568372aabb4437b99288213faef"
Feb 18 19:41:08 crc kubenswrapper[4754]: E0218 19:41:08.869897 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e32835b6d032ac67246d0c6f7651baaee755568372aabb4437b99288213faef\": container with ID starting with 0e32835b6d032ac67246d0c6f7651baaee755568372aabb4437b99288213faef not found: ID does not exist" containerID="0e32835b6d032ac67246d0c6f7651baaee755568372aabb4437b99288213faef"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.869955 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e32835b6d032ac67246d0c6f7651baaee755568372aabb4437b99288213faef"} err="failed to get container status \"0e32835b6d032ac67246d0c6f7651baaee755568372aabb4437b99288213faef\": rpc error: code = NotFound desc = could not find container \"0e32835b6d032ac67246d0c6f7651baaee755568372aabb4437b99288213faef\": container with ID starting with 0e32835b6d032ac67246d0c6f7651baaee755568372aabb4437b99288213faef not found: ID does not exist"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.869997 4754 scope.go:117] "RemoveContainer" containerID="d4ebb3227cd233e6533f6e8e2dd27570d0b7c6e993a229fc701cc8afb225a123"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.872183 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4ebb3227cd233e6533f6e8e2dd27570d0b7c6e993a229fc701cc8afb225a123"} err="failed to get container status \"d4ebb3227cd233e6533f6e8e2dd27570d0b7c6e993a229fc701cc8afb225a123\": rpc error: code = NotFound desc = could not find container \"d4ebb3227cd233e6533f6e8e2dd27570d0b7c6e993a229fc701cc8afb225a123\": container with ID starting with d4ebb3227cd233e6533f6e8e2dd27570d0b7c6e993a229fc701cc8afb225a123 not found: ID does not exist"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.872225 4754 scope.go:117] "RemoveContainer" containerID="0e32835b6d032ac67246d0c6f7651baaee755568372aabb4437b99288213faef"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.872720 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e32835b6d032ac67246d0c6f7651baaee755568372aabb4437b99288213faef"} err="failed to get container status \"0e32835b6d032ac67246d0c6f7651baaee755568372aabb4437b99288213faef\": rpc error: code = NotFound desc = could not find container \"0e32835b6d032ac67246d0c6f7651baaee755568372aabb4437b99288213faef\": container with ID starting with 0e32835b6d032ac67246d0c6f7651baaee755568372aabb4437b99288213faef not found: ID does not exist"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.893235 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2165ea-0c21-4fb1-98db-8a48b7abac4d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8f2165ea-0c21-4fb1-98db-8a48b7abac4d\") " pod="openstack/nova-cell1-conductor-0"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.893528 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n42jt\" (UniqueName: \"kubernetes.io/projected/8f2165ea-0c21-4fb1-98db-8a48b7abac4d-kube-api-access-n42jt\") pod \"nova-cell1-conductor-0\" (UID: \"8f2165ea-0c21-4fb1-98db-8a48b7abac4d\") " pod="openstack/nova-cell1-conductor-0"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.893639 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2165ea-0c21-4fb1-98db-8a48b7abac4d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8f2165ea-0c21-4fb1-98db-8a48b7abac4d\") " pod="openstack/nova-cell1-conductor-0"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.995120 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n42jt\" (UniqueName: \"kubernetes.io/projected/8f2165ea-0c21-4fb1-98db-8a48b7abac4d-kube-api-access-n42jt\") pod \"nova-cell1-conductor-0\" (UID: \"8f2165ea-0c21-4fb1-98db-8a48b7abac4d\") " pod="openstack/nova-cell1-conductor-0"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.995207 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2165ea-0c21-4fb1-98db-8a48b7abac4d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8f2165ea-0c21-4fb1-98db-8a48b7abac4d\") " pod="openstack/nova-cell1-conductor-0"
Feb 18 19:41:08 crc kubenswrapper[4754]: I0218 19:41:08.995291 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2165ea-0c21-4fb1-98db-8a48b7abac4d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8f2165ea-0c21-4fb1-98db-8a48b7abac4d\") " pod="openstack/nova-cell1-conductor-0"
Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.002170 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2165ea-0c21-4fb1-98db-8a48b7abac4d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8f2165ea-0c21-4fb1-98db-8a48b7abac4d\") " pod="openstack/nova-cell1-conductor-0"
Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.002364 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2165ea-0c21-4fb1-98db-8a48b7abac4d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8f2165ea-0c21-4fb1-98db-8a48b7abac4d\") " pod="openstack/nova-cell1-conductor-0"
Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.019672 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n42jt\" (UniqueName: \"kubernetes.io/projected/8f2165ea-0c21-4fb1-98db-8a48b7abac4d-kube-api-access-n42jt\") pod \"nova-cell1-conductor-0\" (UID: \"8f2165ea-0c21-4fb1-98db-8a48b7abac4d\") " pod="openstack/nova-cell1-conductor-0"
Feb 18 19:41:09 crc kubenswrapper[4754]: I0218
19:41:09.168525 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.175599 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.184403 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.215607 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.217519 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.219641 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.219857 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.229633 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.302902 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/254a62e3-ae00-4da7-8b56-8ed9f6580e17-config-data\") pod \"nova-metadata-0\" (UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " pod="openstack/nova-metadata-0" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.303054 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254a62e3-ae00-4da7-8b56-8ed9f6580e17-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " 
pod="openstack/nova-metadata-0" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.303097 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/254a62e3-ae00-4da7-8b56-8ed9f6580e17-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " pod="openstack/nova-metadata-0" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.303200 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/254a62e3-ae00-4da7-8b56-8ed9f6580e17-logs\") pod \"nova-metadata-0\" (UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " pod="openstack/nova-metadata-0" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.303227 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bszvl\" (UniqueName: \"kubernetes.io/projected/254a62e3-ae00-4da7-8b56-8ed9f6580e17-kube-api-access-bszvl\") pod \"nova-metadata-0\" (UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " pod="openstack/nova-metadata-0" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.404652 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/254a62e3-ae00-4da7-8b56-8ed9f6580e17-config-data\") pod \"nova-metadata-0\" (UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " pod="openstack/nova-metadata-0" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.404765 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254a62e3-ae00-4da7-8b56-8ed9f6580e17-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " pod="openstack/nova-metadata-0" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.404799 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/254a62e3-ae00-4da7-8b56-8ed9f6580e17-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " pod="openstack/nova-metadata-0" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.404868 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/254a62e3-ae00-4da7-8b56-8ed9f6580e17-logs\") pod \"nova-metadata-0\" (UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " pod="openstack/nova-metadata-0" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.404889 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bszvl\" (UniqueName: \"kubernetes.io/projected/254a62e3-ae00-4da7-8b56-8ed9f6580e17-kube-api-access-bszvl\") pod \"nova-metadata-0\" (UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " pod="openstack/nova-metadata-0" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.406845 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/254a62e3-ae00-4da7-8b56-8ed9f6580e17-logs\") pod \"nova-metadata-0\" (UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " pod="openstack/nova-metadata-0" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.409912 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/254a62e3-ae00-4da7-8b56-8ed9f6580e17-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " pod="openstack/nova-metadata-0" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.410792 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/254a62e3-ae00-4da7-8b56-8ed9f6580e17-config-data\") pod \"nova-metadata-0\" 
(UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " pod="openstack/nova-metadata-0" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.411317 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254a62e3-ae00-4da7-8b56-8ed9f6580e17-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " pod="openstack/nova-metadata-0" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.425653 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bszvl\" (UniqueName: \"kubernetes.io/projected/254a62e3-ae00-4da7-8b56-8ed9f6580e17-kube-api-access-bszvl\") pod \"nova-metadata-0\" (UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " pod="openstack/nova-metadata-0" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.616892 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.665546 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 19:41:09 crc kubenswrapper[4754]: I0218 19:41:09.797090 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8f2165ea-0c21-4fb1-98db-8a48b7abac4d","Type":"ContainerStarted","Data":"b90293f254e6264404f59430b77cf7df5b41ae9e3aa4c77f9239a6a68c9ec3ce"} Feb 18 19:41:09 crc kubenswrapper[4754]: E0218 19:41:09.841939 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bbb135ff28ed9909d091b1ddc7fd10dad91debe48842f57853f0f6bffd75d24a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:41:09 crc kubenswrapper[4754]: E0218 19:41:09.843613 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc 
= command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bbb135ff28ed9909d091b1ddc7fd10dad91debe48842f57853f0f6bffd75d24a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:41:09 crc kubenswrapper[4754]: E0218 19:41:09.844822 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bbb135ff28ed9909d091b1ddc7fd10dad91debe48842f57853f0f6bffd75d24a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:41:09 crc kubenswrapper[4754]: E0218 19:41:09.844882 4754 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6" containerName="nova-scheduler-scheduler" Feb 18 19:41:10 crc kubenswrapper[4754]: I0218 19:41:10.048498 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:41:10 crc kubenswrapper[4754]: W0218 19:41:10.052255 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod254a62e3_ae00_4da7_8b56_8ed9f6580e17.slice/crio-2fb8c9a74a5dcbca898f541fb4839003aff3c6bb1aafc94987ee7c64d75b7a44 WatchSource:0}: Error finding container 2fb8c9a74a5dcbca898f541fb4839003aff3c6bb1aafc94987ee7c64d75b7a44: Status 404 returned error can't find the container with id 2fb8c9a74a5dcbca898f541fb4839003aff3c6bb1aafc94987ee7c64d75b7a44 Feb 18 19:41:10 crc kubenswrapper[4754]: I0218 19:41:10.237944 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09236174-0a6c-4663-b15b-538529da5663" path="/var/lib/kubelet/pods/09236174-0a6c-4663-b15b-538529da5663/volumes" Feb 18 19:41:10 crc kubenswrapper[4754]: I0218 
19:41:10.810272 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8f2165ea-0c21-4fb1-98db-8a48b7abac4d","Type":"ContainerStarted","Data":"ba1bb9d55d7c3e42f25bc0629e60ae41e6bca482c4b734518db266cce412da8c"} Feb 18 19:41:10 crc kubenswrapper[4754]: I0218 19:41:10.813704 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 18 19:41:10 crc kubenswrapper[4754]: I0218 19:41:10.816099 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"254a62e3-ae00-4da7-8b56-8ed9f6580e17","Type":"ContainerStarted","Data":"4e69b7f8cecc43ad2c87272ea87230a38a8059d14a23aabe01be60297d1306f1"} Feb 18 19:41:10 crc kubenswrapper[4754]: I0218 19:41:10.816202 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"254a62e3-ae00-4da7-8b56-8ed9f6580e17","Type":"ContainerStarted","Data":"9b46709400a1e4360c4212962ed64141c7b25893a064c40fc9374249088977c5"} Feb 18 19:41:10 crc kubenswrapper[4754]: I0218 19:41:10.816219 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"254a62e3-ae00-4da7-8b56-8ed9f6580e17","Type":"ContainerStarted","Data":"2fb8c9a74a5dcbca898f541fb4839003aff3c6bb1aafc94987ee7c64d75b7a44"} Feb 18 19:41:10 crc kubenswrapper[4754]: I0218 19:41:10.839663 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.839639575 podStartE2EDuration="2.839639575s" podCreationTimestamp="2026-02-18 19:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:10.836599661 +0000 UTC m=+1373.287012457" watchObservedRunningTime="2026-02-18 19:41:10.839639575 +0000 UTC m=+1373.290052361" Feb 18 19:41:10 crc kubenswrapper[4754]: I0218 19:41:10.871663 4754 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.871628692 podStartE2EDuration="1.871628692s" podCreationTimestamp="2026-02-18 19:41:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:10.861323984 +0000 UTC m=+1373.311736780" watchObservedRunningTime="2026-02-18 19:41:10.871628692 +0000 UTC m=+1373.322041488" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.628631 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.773273 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-config-data\") pod \"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\" (UID: \"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\") " Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.773686 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-combined-ca-bundle\") pod \"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\" (UID: \"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\") " Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.773755 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjk24\" (UniqueName: \"kubernetes.io/projected/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-kube-api-access-jjk24\") pod \"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\" (UID: \"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\") " Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.773913 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-logs\") pod 
\"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\" (UID: \"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1\") " Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.774643 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-logs" (OuterVolumeSpecName: "logs") pod "be3d5c8c-4d4e-4049-bc2a-310d88b43bb1" (UID: "be3d5c8c-4d4e-4049-bc2a-310d88b43bb1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.779788 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-kube-api-access-jjk24" (OuterVolumeSpecName: "kube-api-access-jjk24") pod "be3d5c8c-4d4e-4049-bc2a-310d88b43bb1" (UID: "be3d5c8c-4d4e-4049-bc2a-310d88b43bb1"). InnerVolumeSpecName "kube-api-access-jjk24". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.814196 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be3d5c8c-4d4e-4049-bc2a-310d88b43bb1" (UID: "be3d5c8c-4d4e-4049-bc2a-310d88b43bb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.817293 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-config-data" (OuterVolumeSpecName: "config-data") pod "be3d5c8c-4d4e-4049-bc2a-310d88b43bb1" (UID: "be3d5c8c-4d4e-4049-bc2a-310d88b43bb1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.828452 4754 generic.go:334] "Generic (PLEG): container finished" podID="be3d5c8c-4d4e-4049-bc2a-310d88b43bb1" containerID="73fb9818dc79f95906465809820c804004adeb0d4aec7f1cfdd62c6fa4558ea8" exitCode=0 Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.828504 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.828532 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1","Type":"ContainerDied","Data":"73fb9818dc79f95906465809820c804004adeb0d4aec7f1cfdd62c6fa4558ea8"} Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.828571 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"be3d5c8c-4d4e-4049-bc2a-310d88b43bb1","Type":"ContainerDied","Data":"c6f9a862bc1aa9d7780458fc6ad69b3a70543e1fff9b8db72a1ab49117640d08"} Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.828621 4754 scope.go:117] "RemoveContainer" containerID="73fb9818dc79f95906465809820c804004adeb0d4aec7f1cfdd62c6fa4558ea8" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.877824 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.877868 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.877883 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjk24\" (UniqueName: 
\"kubernetes.io/projected/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-kube-api-access-jjk24\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.877897 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.913083 4754 scope.go:117] "RemoveContainer" containerID="2a64d414e0b7addbe0db3945cead674a60c080bb005b7d0bc57cf9d72195a9ea" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.916896 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.935166 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.947883 4754 scope.go:117] "RemoveContainer" containerID="73fb9818dc79f95906465809820c804004adeb0d4aec7f1cfdd62c6fa4558ea8" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.948647 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 19:41:11 crc kubenswrapper[4754]: E0218 19:41:11.948861 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73fb9818dc79f95906465809820c804004adeb0d4aec7f1cfdd62c6fa4558ea8\": container with ID starting with 73fb9818dc79f95906465809820c804004adeb0d4aec7f1cfdd62c6fa4558ea8 not found: ID does not exist" containerID="73fb9818dc79f95906465809820c804004adeb0d4aec7f1cfdd62c6fa4558ea8" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.948932 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73fb9818dc79f95906465809820c804004adeb0d4aec7f1cfdd62c6fa4558ea8"} err="failed to get container status \"73fb9818dc79f95906465809820c804004adeb0d4aec7f1cfdd62c6fa4558ea8\": rpc error: code = NotFound desc = 
could not find container \"73fb9818dc79f95906465809820c804004adeb0d4aec7f1cfdd62c6fa4558ea8\": container with ID starting with 73fb9818dc79f95906465809820c804004adeb0d4aec7f1cfdd62c6fa4558ea8 not found: ID does not exist" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.949005 4754 scope.go:117] "RemoveContainer" containerID="2a64d414e0b7addbe0db3945cead674a60c080bb005b7d0bc57cf9d72195a9ea" Feb 18 19:41:11 crc kubenswrapper[4754]: E0218 19:41:11.949373 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be3d5c8c-4d4e-4049-bc2a-310d88b43bb1" containerName="nova-api-log" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.949444 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="be3d5c8c-4d4e-4049-bc2a-310d88b43bb1" containerName="nova-api-log" Feb 18 19:41:11 crc kubenswrapper[4754]: E0218 19:41:11.949525 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be3d5c8c-4d4e-4049-bc2a-310d88b43bb1" containerName="nova-api-api" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.949601 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="be3d5c8c-4d4e-4049-bc2a-310d88b43bb1" containerName="nova-api-api" Feb 18 19:41:11 crc kubenswrapper[4754]: E0218 19:41:11.949676 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a64d414e0b7addbe0db3945cead674a60c080bb005b7d0bc57cf9d72195a9ea\": container with ID starting with 2a64d414e0b7addbe0db3945cead674a60c080bb005b7d0bc57cf9d72195a9ea not found: ID does not exist" containerID="2a64d414e0b7addbe0db3945cead674a60c080bb005b7d0bc57cf9d72195a9ea" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.949723 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a64d414e0b7addbe0db3945cead674a60c080bb005b7d0bc57cf9d72195a9ea"} err="failed to get container status \"2a64d414e0b7addbe0db3945cead674a60c080bb005b7d0bc57cf9d72195a9ea\": rpc error: 
code = NotFound desc = could not find container \"2a64d414e0b7addbe0db3945cead674a60c080bb005b7d0bc57cf9d72195a9ea\": container with ID starting with 2a64d414e0b7addbe0db3945cead674a60c080bb005b7d0bc57cf9d72195a9ea not found: ID does not exist" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.950898 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="be3d5c8c-4d4e-4049-bc2a-310d88b43bb1" containerName="nova-api-log" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.951020 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="be3d5c8c-4d4e-4049-bc2a-310d88b43bb1" containerName="nova-api-api" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.952630 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.957987 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 19:41:11 crc kubenswrapper[4754]: I0218 19:41:11.972168 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:41:12 crc kubenswrapper[4754]: I0218 19:41:12.082809 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6467l\" (UniqueName: \"kubernetes.io/projected/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-kube-api-access-6467l\") pod \"nova-api-0\" (UID: \"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\") " pod="openstack/nova-api-0" Feb 18 19:41:12 crc kubenswrapper[4754]: I0218 19:41:12.082906 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-config-data\") pod \"nova-api-0\" (UID: \"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\") " pod="openstack/nova-api-0" Feb 18 19:41:12 crc kubenswrapper[4754]: I0218 19:41:12.083030 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\") " pod="openstack/nova-api-0" Feb 18 19:41:12 crc kubenswrapper[4754]: I0218 19:41:12.083056 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-logs\") pod \"nova-api-0\" (UID: \"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\") " pod="openstack/nova-api-0" Feb 18 19:41:12 crc kubenswrapper[4754]: I0218 19:41:12.185293 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\") " pod="openstack/nova-api-0" Feb 18 19:41:12 crc kubenswrapper[4754]: I0218 19:41:12.185354 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-logs\") pod \"nova-api-0\" (UID: \"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\") " pod="openstack/nova-api-0" Feb 18 19:41:12 crc kubenswrapper[4754]: I0218 19:41:12.185535 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6467l\" (UniqueName: \"kubernetes.io/projected/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-kube-api-access-6467l\") pod \"nova-api-0\" (UID: \"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\") " pod="openstack/nova-api-0" Feb 18 19:41:12 crc kubenswrapper[4754]: I0218 19:41:12.185594 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-config-data\") pod \"nova-api-0\" (UID: 
\"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\") " pod="openstack/nova-api-0" Feb 18 19:41:12 crc kubenswrapper[4754]: I0218 19:41:12.186934 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-logs\") pod \"nova-api-0\" (UID: \"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\") " pod="openstack/nova-api-0" Feb 18 19:41:12 crc kubenswrapper[4754]: I0218 19:41:12.203883 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\") " pod="openstack/nova-api-0" Feb 18 19:41:12 crc kubenswrapper[4754]: I0218 19:41:12.204521 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-config-data\") pod \"nova-api-0\" (UID: \"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\") " pod="openstack/nova-api-0" Feb 18 19:41:12 crc kubenswrapper[4754]: I0218 19:41:12.205923 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6467l\" (UniqueName: \"kubernetes.io/projected/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-kube-api-access-6467l\") pod \"nova-api-0\" (UID: \"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\") " pod="openstack/nova-api-0" Feb 18 19:41:12 crc kubenswrapper[4754]: I0218 19:41:12.223085 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be3d5c8c-4d4e-4049-bc2a-310d88b43bb1" path="/var/lib/kubelet/pods/be3d5c8c-4d4e-4049-bc2a-310d88b43bb1/volumes" Feb 18 19:41:12 crc kubenswrapper[4754]: I0218 19:41:12.283030 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:41:12 crc kubenswrapper[4754]: I0218 19:41:12.772961 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:41:12 crc kubenswrapper[4754]: W0218 19:41:12.775716 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0e57eed_4c9b_41a5_95e9_01b18336b2c0.slice/crio-9ba5831f5ba7e136510d27c033e0326ddeef22607cac01d5c2382ab6666eff6d WatchSource:0}: Error finding container 9ba5831f5ba7e136510d27c033e0326ddeef22607cac01d5c2382ab6666eff6d: Status 404 returned error can't find the container with id 9ba5831f5ba7e136510d27c033e0326ddeef22607cac01d5c2382ab6666eff6d Feb 18 19:41:12 crc kubenswrapper[4754]: I0218 19:41:12.846452 4754 generic.go:334] "Generic (PLEG): container finished" podID="fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6" containerID="bbb135ff28ed9909d091b1ddc7fd10dad91debe48842f57853f0f6bffd75d24a" exitCode=0 Feb 18 19:41:12 crc kubenswrapper[4754]: I0218 19:41:12.846630 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6","Type":"ContainerDied","Data":"bbb135ff28ed9909d091b1ddc7fd10dad91debe48842f57853f0f6bffd75d24a"} Feb 18 19:41:12 crc kubenswrapper[4754]: I0218 19:41:12.849113 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0e57eed-4c9b-41a5-95e9-01b18336b2c0","Type":"ContainerStarted","Data":"9ba5831f5ba7e136510d27c033e0326ddeef22607cac01d5c2382ab6666eff6d"} Feb 18 19:41:12 crc kubenswrapper[4754]: I0218 19:41:12.934593 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.003324 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6-combined-ca-bundle\") pod \"fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6\" (UID: \"fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6\") " Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.003467 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6-config-data\") pod \"fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6\" (UID: \"fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6\") " Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.003526 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9sww\" (UniqueName: \"kubernetes.io/projected/fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6-kube-api-access-s9sww\") pod \"fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6\" (UID: \"fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6\") " Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.006578 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6-kube-api-access-s9sww" (OuterVolumeSpecName: "kube-api-access-s9sww") pod "fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6" (UID: "fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6"). InnerVolumeSpecName "kube-api-access-s9sww". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.033033 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6" (UID: "fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.041274 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6-config-data" (OuterVolumeSpecName: "config-data") pod "fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6" (UID: "fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.106909 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.106949 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.106958 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9sww\" (UniqueName: \"kubernetes.io/projected/fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6-kube-api-access-s9sww\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.860761 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0e57eed-4c9b-41a5-95e9-01b18336b2c0","Type":"ContainerStarted","Data":"85024d116096cc5596cf26fd76008083a66c9d62387fd4b45881b8460a54c707"} Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.861165 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0e57eed-4c9b-41a5-95e9-01b18336b2c0","Type":"ContainerStarted","Data":"04a0522a0720f9a4c20dbc13a1475b7830bd1096227aca8dbebf3526407188de"} Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.862587 4754 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-scheduler-0" event={"ID":"fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6","Type":"ContainerDied","Data":"c1dbabe2a846b19b5b6315a736985d68f974296e6b985834b4651eabc670d4be"} Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.862635 4754 scope.go:117] "RemoveContainer" containerID="bbb135ff28ed9909d091b1ddc7fd10dad91debe48842f57853f0f6bffd75d24a" Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.862662 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.905981 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.9059435110000003 podStartE2EDuration="2.905943511s" podCreationTimestamp="2026-02-18 19:41:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:13.8935518 +0000 UTC m=+1376.343964596" watchObservedRunningTime="2026-02-18 19:41:13.905943511 +0000 UTC m=+1376.356356307" Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.934547 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.952981 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.972458 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:41:13 crc kubenswrapper[4754]: E0218 19:41:13.973016 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6" containerName="nova-scheduler-scheduler" Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.973034 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6" containerName="nova-scheduler-scheduler" Feb 18 19:41:13 crc 
kubenswrapper[4754]: I0218 19:41:13.973234 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6" containerName="nova-scheduler-scheduler" Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.973923 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.979021 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:41:13 crc kubenswrapper[4754]: I0218 19:41:13.979311 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 19:41:14 crc kubenswrapper[4754]: I0218 19:41:14.128444 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40656c76-f405-4d8a-8b29-384ccae5068b-config-data\") pod \"nova-scheduler-0\" (UID: \"40656c76-f405-4d8a-8b29-384ccae5068b\") " pod="openstack/nova-scheduler-0" Feb 18 19:41:14 crc kubenswrapper[4754]: I0218 19:41:14.128513 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40656c76-f405-4d8a-8b29-384ccae5068b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40656c76-f405-4d8a-8b29-384ccae5068b\") " pod="openstack/nova-scheduler-0" Feb 18 19:41:14 crc kubenswrapper[4754]: I0218 19:41:14.128541 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxvkb\" (UniqueName: \"kubernetes.io/projected/40656c76-f405-4d8a-8b29-384ccae5068b-kube-api-access-bxvkb\") pod \"nova-scheduler-0\" (UID: \"40656c76-f405-4d8a-8b29-384ccae5068b\") " pod="openstack/nova-scheduler-0" Feb 18 19:41:14 crc kubenswrapper[4754]: I0218 19:41:14.206247 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-cell1-conductor-0" Feb 18 19:41:14 crc kubenswrapper[4754]: I0218 19:41:14.229921 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6" path="/var/lib/kubelet/pods/fc2f85b4-158f-47ce-b12f-d5e2a55fc2c6/volumes" Feb 18 19:41:14 crc kubenswrapper[4754]: I0218 19:41:14.231672 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40656c76-f405-4d8a-8b29-384ccae5068b-config-data\") pod \"nova-scheduler-0\" (UID: \"40656c76-f405-4d8a-8b29-384ccae5068b\") " pod="openstack/nova-scheduler-0" Feb 18 19:41:14 crc kubenswrapper[4754]: I0218 19:41:14.234653 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40656c76-f405-4d8a-8b29-384ccae5068b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40656c76-f405-4d8a-8b29-384ccae5068b\") " pod="openstack/nova-scheduler-0" Feb 18 19:41:14 crc kubenswrapper[4754]: I0218 19:41:14.234712 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxvkb\" (UniqueName: \"kubernetes.io/projected/40656c76-f405-4d8a-8b29-384ccae5068b-kube-api-access-bxvkb\") pod \"nova-scheduler-0\" (UID: \"40656c76-f405-4d8a-8b29-384ccae5068b\") " pod="openstack/nova-scheduler-0" Feb 18 19:41:14 crc kubenswrapper[4754]: I0218 19:41:14.237552 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40656c76-f405-4d8a-8b29-384ccae5068b-config-data\") pod \"nova-scheduler-0\" (UID: \"40656c76-f405-4d8a-8b29-384ccae5068b\") " pod="openstack/nova-scheduler-0" Feb 18 19:41:14 crc kubenswrapper[4754]: I0218 19:41:14.249132 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40656c76-f405-4d8a-8b29-384ccae5068b-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"40656c76-f405-4d8a-8b29-384ccae5068b\") " pod="openstack/nova-scheduler-0" Feb 18 19:41:14 crc kubenswrapper[4754]: I0218 19:41:14.259135 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxvkb\" (UniqueName: \"kubernetes.io/projected/40656c76-f405-4d8a-8b29-384ccae5068b-kube-api-access-bxvkb\") pod \"nova-scheduler-0\" (UID: \"40656c76-f405-4d8a-8b29-384ccae5068b\") " pod="openstack/nova-scheduler-0" Feb 18 19:41:14 crc kubenswrapper[4754]: I0218 19:41:14.298074 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:41:14 crc kubenswrapper[4754]: I0218 19:41:14.617086 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:41:14 crc kubenswrapper[4754]: I0218 19:41:14.617400 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:41:14 crc kubenswrapper[4754]: W0218 19:41:14.766537 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40656c76_f405_4d8a_8b29_384ccae5068b.slice/crio-49e369ee208bdf218c4242cabbfb6f3efac7a9ae75f23d3f696ccffcc0728fa9 WatchSource:0}: Error finding container 49e369ee208bdf218c4242cabbfb6f3efac7a9ae75f23d3f696ccffcc0728fa9: Status 404 returned error can't find the container with id 49e369ee208bdf218c4242cabbfb6f3efac7a9ae75f23d3f696ccffcc0728fa9 Feb 18 19:41:14 crc kubenswrapper[4754]: I0218 19:41:14.771575 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:41:14 crc kubenswrapper[4754]: I0218 19:41:14.879807 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40656c76-f405-4d8a-8b29-384ccae5068b","Type":"ContainerStarted","Data":"49e369ee208bdf218c4242cabbfb6f3efac7a9ae75f23d3f696ccffcc0728fa9"} Feb 18 19:41:15 crc 
kubenswrapper[4754]: I0218 19:41:15.894258 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40656c76-f405-4d8a-8b29-384ccae5068b","Type":"ContainerStarted","Data":"4aabcb2e001cced419478763a85e85aca5ae6e8aa0b7fb0301fa93993e1bc783"} Feb 18 19:41:15 crc kubenswrapper[4754]: I0218 19:41:15.914236 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.91419673 podStartE2EDuration="2.91419673s" podCreationTimestamp="2026-02-18 19:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:15.912327822 +0000 UTC m=+1378.362740678" watchObservedRunningTime="2026-02-18 19:41:15.91419673 +0000 UTC m=+1378.364609566" Feb 18 19:41:19 crc kubenswrapper[4754]: I0218 19:41:19.299581 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 19:41:19 crc kubenswrapper[4754]: I0218 19:41:19.618009 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 19:41:19 crc kubenswrapper[4754]: I0218 19:41:19.618432 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 19:41:20 crc kubenswrapper[4754]: I0218 19:41:20.631358 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="254a62e3-ae00-4da7-8b56-8ed9f6580e17" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:41:20 crc kubenswrapper[4754]: I0218 19:41:20.631431 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="254a62e3-ae00-4da7-8b56-8ed9f6580e17" containerName="nova-metadata-metadata" probeResult="failure" output="Get 
\"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:41:20 crc kubenswrapper[4754]: I0218 19:41:20.986816 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 19:41:22 crc kubenswrapper[4754]: I0218 19:41:22.284497 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:41:22 crc kubenswrapper[4754]: I0218 19:41:22.286622 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:41:23 crc kubenswrapper[4754]: I0218 19:41:23.366337 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b0e57eed-4c9b-41a5-95e9-01b18336b2c0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.217:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:41:23 crc kubenswrapper[4754]: I0218 19:41:23.366433 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b0e57eed-4c9b-41a5-95e9-01b18336b2c0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.217:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:41:24 crc kubenswrapper[4754]: I0218 19:41:24.299262 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 19:41:24 crc kubenswrapper[4754]: I0218 19:41:24.329443 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 19:41:25 crc kubenswrapper[4754]: I0218 19:41:25.076185 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 19:41:29 crc kubenswrapper[4754]: I0218 19:41:29.626163 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-metadata-0" Feb 18 19:41:29 crc kubenswrapper[4754]: I0218 19:41:29.628776 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 19:41:29 crc kubenswrapper[4754]: I0218 19:41:29.635213 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 19:41:30 crc kubenswrapper[4754]: I0218 19:41:30.087117 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 19:41:31 crc kubenswrapper[4754]: E0218 19:41:31.907889 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode730fc15_7e56_45fa_a275_aca5ca181835.slice/crio-conmon-e8856cbcb3f83126a07b4c9c5639cce779747b8ac46cebdfc802b38ea2d94f86.scope\": RecentStats: unable to find data in memory cache]" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.052329 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.111645 4754 generic.go:334] "Generic (PLEG): container finished" podID="e730fc15-7e56-45fa-a275-aca5ca181835" containerID="e8856cbcb3f83126a07b4c9c5639cce779747b8ac46cebdfc802b38ea2d94f86" exitCode=137 Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.111736 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e730fc15-7e56-45fa-a275-aca5ca181835","Type":"ContainerDied","Data":"e8856cbcb3f83126a07b4c9c5639cce779747b8ac46cebdfc802b38ea2d94f86"} Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.111742 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.111807 4754 scope.go:117] "RemoveContainer" containerID="e8856cbcb3f83126a07b4c9c5639cce779747b8ac46cebdfc802b38ea2d94f86" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.111792 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e730fc15-7e56-45fa-a275-aca5ca181835","Type":"ContainerDied","Data":"c3bfc763191b93e95560e94f129406074619ad44ff97e189e8eba044152a938e"} Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.137127 4754 scope.go:117] "RemoveContainer" containerID="e8856cbcb3f83126a07b4c9c5639cce779747b8ac46cebdfc802b38ea2d94f86" Feb 18 19:41:32 crc kubenswrapper[4754]: E0218 19:41:32.138340 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8856cbcb3f83126a07b4c9c5639cce779747b8ac46cebdfc802b38ea2d94f86\": container with ID starting with e8856cbcb3f83126a07b4c9c5639cce779747b8ac46cebdfc802b38ea2d94f86 not found: ID does not exist" containerID="e8856cbcb3f83126a07b4c9c5639cce779747b8ac46cebdfc802b38ea2d94f86" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.138394 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8856cbcb3f83126a07b4c9c5639cce779747b8ac46cebdfc802b38ea2d94f86"} err="failed to get container status \"e8856cbcb3f83126a07b4c9c5639cce779747b8ac46cebdfc802b38ea2d94f86\": rpc error: code = NotFound desc = could not find container \"e8856cbcb3f83126a07b4c9c5639cce779747b8ac46cebdfc802b38ea2d94f86\": container with ID starting with e8856cbcb3f83126a07b4c9c5639cce779747b8ac46cebdfc802b38ea2d94f86 not found: ID does not exist" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.191505 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e730fc15-7e56-45fa-a275-aca5ca181835-combined-ca-bundle\") pod \"e730fc15-7e56-45fa-a275-aca5ca181835\" (UID: \"e730fc15-7e56-45fa-a275-aca5ca181835\") " Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.191595 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e730fc15-7e56-45fa-a275-aca5ca181835-config-data\") pod \"e730fc15-7e56-45fa-a275-aca5ca181835\" (UID: \"e730fc15-7e56-45fa-a275-aca5ca181835\") " Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.191695 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr68r\" (UniqueName: \"kubernetes.io/projected/e730fc15-7e56-45fa-a275-aca5ca181835-kube-api-access-lr68r\") pod \"e730fc15-7e56-45fa-a275-aca5ca181835\" (UID: \"e730fc15-7e56-45fa-a275-aca5ca181835\") " Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.197475 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e730fc15-7e56-45fa-a275-aca5ca181835-kube-api-access-lr68r" (OuterVolumeSpecName: "kube-api-access-lr68r") pod "e730fc15-7e56-45fa-a275-aca5ca181835" (UID: "e730fc15-7e56-45fa-a275-aca5ca181835"). InnerVolumeSpecName "kube-api-access-lr68r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.222125 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e730fc15-7e56-45fa-a275-aca5ca181835-config-data" (OuterVolumeSpecName: "config-data") pod "e730fc15-7e56-45fa-a275-aca5ca181835" (UID: "e730fc15-7e56-45fa-a275-aca5ca181835"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.227865 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e730fc15-7e56-45fa-a275-aca5ca181835-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e730fc15-7e56-45fa-a275-aca5ca181835" (UID: "e730fc15-7e56-45fa-a275-aca5ca181835"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.296463 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e730fc15-7e56-45fa-a275-aca5ca181835-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.296511 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e730fc15-7e56-45fa-a275-aca5ca181835-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.296525 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr68r\" (UniqueName: \"kubernetes.io/projected/e730fc15-7e56-45fa-a275-aca5ca181835-kube-api-access-lr68r\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.301646 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.301732 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.303810 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.303856 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:41:32 crc 
kubenswrapper[4754]: I0218 19:41:32.309456 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.309560 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.454311 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.474299 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.498209 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:41:32 crc kubenswrapper[4754]: E0218 19:41:32.498846 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e730fc15-7e56-45fa-a275-aca5ca181835" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.498878 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e730fc15-7e56-45fa-a275-aca5ca181835" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.499123 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e730fc15-7e56-45fa-a275-aca5ca181835" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.500099 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.510921 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.511211 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.511369 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.522168 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-kp4c9"] Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.523991 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.575471 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.603606 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-kp4c9"] Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.612649 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-kp4c9\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.612727 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-kp4c9\" (UID: 
\"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.612752 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwdgn\" (UniqueName: \"kubernetes.io/projected/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-kube-api-access-vwdgn\") pod \"dnsmasq-dns-cd5cbd7b9-kp4c9\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.612768 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-kp4c9\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.612817 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8r84\" (UniqueName: \"kubernetes.io/projected/619f34f1-2021-4714-b349-e2422b306b64-kube-api-access-b8r84\") pod \"nova-cell1-novncproxy-0\" (UID: \"619f34f1-2021-4714-b349-e2422b306b64\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.612837 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/619f34f1-2021-4714-b349-e2422b306b64-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"619f34f1-2021-4714-b349-e2422b306b64\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.612862 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/619f34f1-2021-4714-b349-e2422b306b64-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"619f34f1-2021-4714-b349-e2422b306b64\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.612883 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/619f34f1-2021-4714-b349-e2422b306b64-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"619f34f1-2021-4714-b349-e2422b306b64\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.612919 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-config\") pod \"dnsmasq-dns-cd5cbd7b9-kp4c9\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.612940 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-kp4c9\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.612995 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/619f34f1-2021-4714-b349-e2422b306b64-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"619f34f1-2021-4714-b349-e2422b306b64\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.714500 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/619f34f1-2021-4714-b349-e2422b306b64-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"619f34f1-2021-4714-b349-e2422b306b64\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.714854 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-kp4c9\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.714989 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-kp4c9\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.715099 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwdgn\" (UniqueName: \"kubernetes.io/projected/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-kube-api-access-vwdgn\") pod \"dnsmasq-dns-cd5cbd7b9-kp4c9\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.715504 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-kp4c9\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.715641 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8r84\" (UniqueName: 
\"kubernetes.io/projected/619f34f1-2021-4714-b349-e2422b306b64-kube-api-access-b8r84\") pod \"nova-cell1-novncproxy-0\" (UID: \"619f34f1-2021-4714-b349-e2422b306b64\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.715736 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/619f34f1-2021-4714-b349-e2422b306b64-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"619f34f1-2021-4714-b349-e2422b306b64\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.715928 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619f34f1-2021-4714-b349-e2422b306b64-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"619f34f1-2021-4714-b349-e2422b306b64\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.716374 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/619f34f1-2021-4714-b349-e2422b306b64-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"619f34f1-2021-4714-b349-e2422b306b64\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.716470 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-config\") pod \"dnsmasq-dns-cd5cbd7b9-kp4c9\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.716557 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-dns-svc\") pod 
\"dnsmasq-dns-cd5cbd7b9-kp4c9\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.717208 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-config\") pod \"dnsmasq-dns-cd5cbd7b9-kp4c9\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.716004 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-kp4c9\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.716046 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-kp4c9\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.716593 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-kp4c9\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.717680 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-kp4c9\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " 
pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.719708 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/619f34f1-2021-4714-b349-e2422b306b64-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"619f34f1-2021-4714-b349-e2422b306b64\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.719802 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/619f34f1-2021-4714-b349-e2422b306b64-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"619f34f1-2021-4714-b349-e2422b306b64\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.719884 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/619f34f1-2021-4714-b349-e2422b306b64-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"619f34f1-2021-4714-b349-e2422b306b64\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.719914 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619f34f1-2021-4714-b349-e2422b306b64-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"619f34f1-2021-4714-b349-e2422b306b64\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.731978 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwdgn\" (UniqueName: \"kubernetes.io/projected/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-kube-api-access-vwdgn\") pod \"dnsmasq-dns-cd5cbd7b9-kp4c9\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 
19:41:32.733268 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8r84\" (UniqueName: \"kubernetes.io/projected/619f34f1-2021-4714-b349-e2422b306b64-kube-api-access-b8r84\") pod \"nova-cell1-novncproxy-0\" (UID: \"619f34f1-2021-4714-b349-e2422b306b64\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.825833 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:32 crc kubenswrapper[4754]: I0218 19:41:32.890387 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:33 crc kubenswrapper[4754]: I0218 19:41:33.492080 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:41:33 crc kubenswrapper[4754]: W0218 19:41:33.511941 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod619f34f1_2021_4714_b349_e2422b306b64.slice/crio-d1b891f4027b765577221cd74965e604c855dd442dbf8fa505e8ccd056373eef WatchSource:0}: Error finding container d1b891f4027b765577221cd74965e604c855dd442dbf8fa505e8ccd056373eef: Status 404 returned error can't find the container with id d1b891f4027b765577221cd74965e604c855dd442dbf8fa505e8ccd056373eef Feb 18 19:41:33 crc kubenswrapper[4754]: W0218 19:41:33.608334 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6bfc7e0_35f6_4b69_bb0c_0f0077a18c96.slice/crio-50d71b86f9c3bc537018cb782c4b9053d403b87c0eb80abd91b3c9fcf49363a3 WatchSource:0}: Error finding container 50d71b86f9c3bc537018cb782c4b9053d403b87c0eb80abd91b3c9fcf49363a3: Status 404 returned error can't find the container with id 50d71b86f9c3bc537018cb782c4b9053d403b87c0eb80abd91b3c9fcf49363a3 Feb 18 19:41:33 crc kubenswrapper[4754]: I0218 
19:41:33.609912 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-kp4c9"] Feb 18 19:41:34 crc kubenswrapper[4754]: I0218 19:41:34.180640 4754 generic.go:334] "Generic (PLEG): container finished" podID="d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96" containerID="a9242d0016dc3ecb80e0f210244185bdd55942dea6569c531befc00e9c62e8c6" exitCode=0 Feb 18 19:41:34 crc kubenswrapper[4754]: I0218 19:41:34.180701 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" event={"ID":"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96","Type":"ContainerDied","Data":"a9242d0016dc3ecb80e0f210244185bdd55942dea6569c531befc00e9c62e8c6"} Feb 18 19:41:34 crc kubenswrapper[4754]: I0218 19:41:34.180949 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" event={"ID":"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96","Type":"ContainerStarted","Data":"50d71b86f9c3bc537018cb782c4b9053d403b87c0eb80abd91b3c9fcf49363a3"} Feb 18 19:41:34 crc kubenswrapper[4754]: I0218 19:41:34.183114 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"619f34f1-2021-4714-b349-e2422b306b64","Type":"ContainerStarted","Data":"b0d6049d4a39f6b36c9e3a55e20ab46b5267f10c5b08e97113c2f0bb304e9438"} Feb 18 19:41:34 crc kubenswrapper[4754]: I0218 19:41:34.183171 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"619f34f1-2021-4714-b349-e2422b306b64","Type":"ContainerStarted","Data":"d1b891f4027b765577221cd74965e604c855dd442dbf8fa505e8ccd056373eef"} Feb 18 19:41:34 crc kubenswrapper[4754]: I0218 19:41:34.251297 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e730fc15-7e56-45fa-a275-aca5ca181835" path="/var/lib/kubelet/pods/e730fc15-7e56-45fa-a275-aca5ca181835/volumes" Feb 18 19:41:34 crc kubenswrapper[4754]: I0218 19:41:34.264595 4754 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.264574634 podStartE2EDuration="2.264574634s" podCreationTimestamp="2026-02-18 19:41:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:34.240800142 +0000 UTC m=+1396.691212938" watchObservedRunningTime="2026-02-18 19:41:34.264574634 +0000 UTC m=+1396.714987430" Feb 18 19:41:34 crc kubenswrapper[4754]: I0218 19:41:34.833694 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w6fkh"] Feb 18 19:41:34 crc kubenswrapper[4754]: I0218 19:41:34.838772 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w6fkh" Feb 18 19:41:34 crc kubenswrapper[4754]: I0218 19:41:34.853189 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w6fkh"] Feb 18 19:41:34 crc kubenswrapper[4754]: I0218 19:41:34.973639 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5gt5\" (UniqueName: \"kubernetes.io/projected/0744bacb-6f20-4de8-a509-cacac3ce4baf-kube-api-access-t5gt5\") pod \"community-operators-w6fkh\" (UID: \"0744bacb-6f20-4de8-a509-cacac3ce4baf\") " pod="openshift-marketplace/community-operators-w6fkh" Feb 18 19:41:34 crc kubenswrapper[4754]: I0218 19:41:34.973988 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0744bacb-6f20-4de8-a509-cacac3ce4baf-utilities\") pod \"community-operators-w6fkh\" (UID: \"0744bacb-6f20-4de8-a509-cacac3ce4baf\") " pod="openshift-marketplace/community-operators-w6fkh" Feb 18 19:41:34 crc kubenswrapper[4754]: I0218 19:41:34.974037 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/0744bacb-6f20-4de8-a509-cacac3ce4baf-catalog-content\") pod \"community-operators-w6fkh\" (UID: \"0744bacb-6f20-4de8-a509-cacac3ce4baf\") " pod="openshift-marketplace/community-operators-w6fkh" Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.032398 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.032891 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerName="ceilometer-central-agent" containerID="cri-o://d8a94db86100dad8d17a427b76a53017994bb39eba38cca08874d753f4bbe70c" gracePeriod=30 Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.033355 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerName="proxy-httpd" containerID="cri-o://4f49ce9a0b51f0538ada6341c439e752fb0c206fc0c59e92169b5c14f0c5b95e" gracePeriod=30 Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.033558 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerName="ceilometer-notification-agent" containerID="cri-o://f22ecaf0fd7cd4d815fe74705bd073b0ca6c1f6fa1d163d88cb5e7daa7f5c90e" gracePeriod=30 Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.033604 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerName="sg-core" containerID="cri-o://4268fe83fc1d6f316b07d5a9a93f6b1a3ed5b0e299ede27385ac233ddf25f51f" gracePeriod=30 Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.076624 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0744bacb-6f20-4de8-a509-cacac3ce4baf-utilities\") pod \"community-operators-w6fkh\" (UID: \"0744bacb-6f20-4de8-a509-cacac3ce4baf\") " pod="openshift-marketplace/community-operators-w6fkh" Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.076942 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0744bacb-6f20-4de8-a509-cacac3ce4baf-catalog-content\") pod \"community-operators-w6fkh\" (UID: \"0744bacb-6f20-4de8-a509-cacac3ce4baf\") " pod="openshift-marketplace/community-operators-w6fkh" Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.077329 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5gt5\" (UniqueName: \"kubernetes.io/projected/0744bacb-6f20-4de8-a509-cacac3ce4baf-kube-api-access-t5gt5\") pod \"community-operators-w6fkh\" (UID: \"0744bacb-6f20-4de8-a509-cacac3ce4baf\") " pod="openshift-marketplace/community-operators-w6fkh" Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.078175 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0744bacb-6f20-4de8-a509-cacac3ce4baf-utilities\") pod \"community-operators-w6fkh\" (UID: \"0744bacb-6f20-4de8-a509-cacac3ce4baf\") " pod="openshift-marketplace/community-operators-w6fkh" Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.078462 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0744bacb-6f20-4de8-a509-cacac3ce4baf-catalog-content\") pod \"community-operators-w6fkh\" (UID: \"0744bacb-6f20-4de8-a509-cacac3ce4baf\") " pod="openshift-marketplace/community-operators-w6fkh" Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.114239 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5gt5\" (UniqueName: 
\"kubernetes.io/projected/0744bacb-6f20-4de8-a509-cacac3ce4baf-kube-api-access-t5gt5\") pod \"community-operators-w6fkh\" (UID: \"0744bacb-6f20-4de8-a509-cacac3ce4baf\") " pod="openshift-marketplace/community-operators-w6fkh" Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.156930 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w6fkh" Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.218342 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" event={"ID":"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96","Type":"ContainerStarted","Data":"482dac0ad38877742aba98a2faac37810f077741e8c90168c2a70b49959ed367"} Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.218619 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.223330 4754 generic.go:334] "Generic (PLEG): container finished" podID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerID="4268fe83fc1d6f316b07d5a9a93f6b1a3ed5b0e299ede27385ac233ddf25f51f" exitCode=2 Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.223536 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02a0bd74-4072-4689-bfdc-4ab76f0b9462","Type":"ContainerDied","Data":"4268fe83fc1d6f316b07d5a9a93f6b1a3ed5b0e299ede27385ac233ddf25f51f"} Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.262276 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" podStartSLOduration=3.26225381 podStartE2EDuration="3.26225381s" podCreationTimestamp="2026-02-18 19:41:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:35.254107779 +0000 UTC m=+1397.704520585" watchObservedRunningTime="2026-02-18 19:41:35.26225381 
+0000 UTC m=+1397.712666606" Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.306417 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.307518 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b0e57eed-4c9b-41a5-95e9-01b18336b2c0" containerName="nova-api-api" containerID="cri-o://85024d116096cc5596cf26fd76008083a66c9d62387fd4b45881b8460a54c707" gracePeriod=30 Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.310074 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b0e57eed-4c9b-41a5-95e9-01b18336b2c0" containerName="nova-api-log" containerID="cri-o://04a0522a0720f9a4c20dbc13a1475b7830bd1096227aca8dbebf3526407188de" gracePeriod=30 Feb 18 19:41:35 crc kubenswrapper[4754]: I0218 19:41:35.797845 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w6fkh"] Feb 18 19:41:36 crc kubenswrapper[4754]: I0218 19:41:36.238415 4754 generic.go:334] "Generic (PLEG): container finished" podID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerID="4f49ce9a0b51f0538ada6341c439e752fb0c206fc0c59e92169b5c14f0c5b95e" exitCode=0 Feb 18 19:41:36 crc kubenswrapper[4754]: I0218 19:41:36.238719 4754 generic.go:334] "Generic (PLEG): container finished" podID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerID="d8a94db86100dad8d17a427b76a53017994bb39eba38cca08874d753f4bbe70c" exitCode=0 Feb 18 19:41:36 crc kubenswrapper[4754]: I0218 19:41:36.238459 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02a0bd74-4072-4689-bfdc-4ab76f0b9462","Type":"ContainerDied","Data":"4f49ce9a0b51f0538ada6341c439e752fb0c206fc0c59e92169b5c14f0c5b95e"} Feb 18 19:41:36 crc kubenswrapper[4754]: I0218 19:41:36.238807 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"02a0bd74-4072-4689-bfdc-4ab76f0b9462","Type":"ContainerDied","Data":"d8a94db86100dad8d17a427b76a53017994bb39eba38cca08874d753f4bbe70c"} Feb 18 19:41:36 crc kubenswrapper[4754]: I0218 19:41:36.243262 4754 generic.go:334] "Generic (PLEG): container finished" podID="b0e57eed-4c9b-41a5-95e9-01b18336b2c0" containerID="04a0522a0720f9a4c20dbc13a1475b7830bd1096227aca8dbebf3526407188de" exitCode=143 Feb 18 19:41:36 crc kubenswrapper[4754]: I0218 19:41:36.243343 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0e57eed-4c9b-41a5-95e9-01b18336b2c0","Type":"ContainerDied","Data":"04a0522a0720f9a4c20dbc13a1475b7830bd1096227aca8dbebf3526407188de"} Feb 18 19:41:36 crc kubenswrapper[4754]: I0218 19:41:36.245662 4754 generic.go:334] "Generic (PLEG): container finished" podID="0744bacb-6f20-4de8-a509-cacac3ce4baf" containerID="f065f2b71db0a065d6688ceb25a73250844d058c8ff13619c8d1b160149f5d68" exitCode=0 Feb 18 19:41:36 crc kubenswrapper[4754]: I0218 19:41:36.245703 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6fkh" event={"ID":"0744bacb-6f20-4de8-a509-cacac3ce4baf","Type":"ContainerDied","Data":"f065f2b71db0a065d6688ceb25a73250844d058c8ff13619c8d1b160149f5d68"} Feb 18 19:41:36 crc kubenswrapper[4754]: I0218 19:41:36.245751 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6fkh" event={"ID":"0744bacb-6f20-4de8-a509-cacac3ce4baf","Type":"ContainerStarted","Data":"e630b7fdff547ce1150db2d8ad00dec30da55d527e49a52583622c1d7d7ff9c0"} Feb 18 19:41:37 crc kubenswrapper[4754]: I0218 19:41:37.257634 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6fkh" event={"ID":"0744bacb-6f20-4de8-a509-cacac3ce4baf","Type":"ContainerStarted","Data":"95d7cd47287076c428c79eb57cb658fa681da7516a782d80fb91fb6fd95d9c58"} Feb 18 19:41:37 crc kubenswrapper[4754]: I0218 19:41:37.826856 4754 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:37 crc kubenswrapper[4754]: I0218 19:41:37.827524 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mgf5c"] Feb 18 19:41:37 crc kubenswrapper[4754]: I0218 19:41:37.829562 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mgf5c" Feb 18 19:41:37 crc kubenswrapper[4754]: I0218 19:41:37.843213 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mgf5c"] Feb 18 19:41:37 crc kubenswrapper[4754]: I0218 19:41:37.954324 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57e445d6-0105-4d21-ac8b-812932e998a2-catalog-content\") pod \"redhat-operators-mgf5c\" (UID: \"57e445d6-0105-4d21-ac8b-812932e998a2\") " pod="openshift-marketplace/redhat-operators-mgf5c" Feb 18 19:41:37 crc kubenswrapper[4754]: I0218 19:41:37.954408 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57e445d6-0105-4d21-ac8b-812932e998a2-utilities\") pod \"redhat-operators-mgf5c\" (UID: \"57e445d6-0105-4d21-ac8b-812932e998a2\") " pod="openshift-marketplace/redhat-operators-mgf5c" Feb 18 19:41:37 crc kubenswrapper[4754]: I0218 19:41:37.954692 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8vwx\" (UniqueName: \"kubernetes.io/projected/57e445d6-0105-4d21-ac8b-812932e998a2-kube-api-access-h8vwx\") pod \"redhat-operators-mgf5c\" (UID: \"57e445d6-0105-4d21-ac8b-812932e998a2\") " pod="openshift-marketplace/redhat-operators-mgf5c" Feb 18 19:41:38 crc kubenswrapper[4754]: I0218 19:41:38.056357 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57e445d6-0105-4d21-ac8b-812932e998a2-catalog-content\") pod \"redhat-operators-mgf5c\" (UID: \"57e445d6-0105-4d21-ac8b-812932e998a2\") " pod="openshift-marketplace/redhat-operators-mgf5c" Feb 18 19:41:38 crc kubenswrapper[4754]: I0218 19:41:38.056431 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57e445d6-0105-4d21-ac8b-812932e998a2-utilities\") pod \"redhat-operators-mgf5c\" (UID: \"57e445d6-0105-4d21-ac8b-812932e998a2\") " pod="openshift-marketplace/redhat-operators-mgf5c" Feb 18 19:41:38 crc kubenswrapper[4754]: I0218 19:41:38.056480 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8vwx\" (UniqueName: \"kubernetes.io/projected/57e445d6-0105-4d21-ac8b-812932e998a2-kube-api-access-h8vwx\") pod \"redhat-operators-mgf5c\" (UID: \"57e445d6-0105-4d21-ac8b-812932e998a2\") " pod="openshift-marketplace/redhat-operators-mgf5c" Feb 18 19:41:38 crc kubenswrapper[4754]: I0218 19:41:38.056916 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57e445d6-0105-4d21-ac8b-812932e998a2-catalog-content\") pod \"redhat-operators-mgf5c\" (UID: \"57e445d6-0105-4d21-ac8b-812932e998a2\") " pod="openshift-marketplace/redhat-operators-mgf5c" Feb 18 19:41:38 crc kubenswrapper[4754]: I0218 19:41:38.057246 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57e445d6-0105-4d21-ac8b-812932e998a2-utilities\") pod \"redhat-operators-mgf5c\" (UID: \"57e445d6-0105-4d21-ac8b-812932e998a2\") " pod="openshift-marketplace/redhat-operators-mgf5c" Feb 18 19:41:38 crc kubenswrapper[4754]: I0218 19:41:38.085287 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8vwx\" (UniqueName: 
\"kubernetes.io/projected/57e445d6-0105-4d21-ac8b-812932e998a2-kube-api-access-h8vwx\") pod \"redhat-operators-mgf5c\" (UID: \"57e445d6-0105-4d21-ac8b-812932e998a2\") " pod="openshift-marketplace/redhat-operators-mgf5c" Feb 18 19:41:38 crc kubenswrapper[4754]: I0218 19:41:38.151047 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mgf5c" Feb 18 19:41:38 crc kubenswrapper[4754]: I0218 19:41:38.276854 4754 generic.go:334] "Generic (PLEG): container finished" podID="0744bacb-6f20-4de8-a509-cacac3ce4baf" containerID="95d7cd47287076c428c79eb57cb658fa681da7516a782d80fb91fb6fd95d9c58" exitCode=0 Feb 18 19:41:38 crc kubenswrapper[4754]: I0218 19:41:38.276974 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6fkh" event={"ID":"0744bacb-6f20-4de8-a509-cacac3ce4baf","Type":"ContainerDied","Data":"95d7cd47287076c428c79eb57cb658fa681da7516a782d80fb91fb6fd95d9c58"} Feb 18 19:41:38 crc kubenswrapper[4754]: I0218 19:41:38.676874 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mgf5c"] Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.414413 4754 generic.go:334] "Generic (PLEG): container finished" podID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerID="f22ecaf0fd7cd4d815fe74705bd073b0ca6c1f6fa1d163d88cb5e7daa7f5c90e" exitCode=0 Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.414831 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02a0bd74-4072-4689-bfdc-4ab76f0b9462","Type":"ContainerDied","Data":"f22ecaf0fd7cd4d815fe74705bd073b0ca6c1f6fa1d163d88cb5e7daa7f5c90e"} Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.431198 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.442746 4754 generic.go:334] "Generic (PLEG): container finished" podID="57e445d6-0105-4d21-ac8b-812932e998a2" containerID="116d3f6f702d17b73c4983d1487dbabbac6aee7202630c27cbdfe657265b3254" exitCode=0 Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.442833 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mgf5c" event={"ID":"57e445d6-0105-4d21-ac8b-812932e998a2","Type":"ContainerDied","Data":"116d3f6f702d17b73c4983d1487dbabbac6aee7202630c27cbdfe657265b3254"} Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.442870 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mgf5c" event={"ID":"57e445d6-0105-4d21-ac8b-812932e998a2","Type":"ContainerStarted","Data":"99a18ea8eeebe2618561c79430dead64b760ccea0a436aa86352f054d1c986d6"} Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.502807 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6fkh" event={"ID":"0744bacb-6f20-4de8-a509-cacac3ce4baf","Type":"ContainerStarted","Data":"f77ab668594608361308127cc5a37861bafc8f4b9ec23e747c9ad79562532e11"} Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.513332 4754 generic.go:334] "Generic (PLEG): container finished" podID="b0e57eed-4c9b-41a5-95e9-01b18336b2c0" containerID="85024d116096cc5596cf26fd76008083a66c9d62387fd4b45881b8460a54c707" exitCode=0 Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.513366 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b0e57eed-4c9b-41a5-95e9-01b18336b2c0","Type":"ContainerDied","Data":"85024d116096cc5596cf26fd76008083a66c9d62387fd4b45881b8460a54c707"} Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.513398 4754 scope.go:117] "RemoveContainer" containerID="85024d116096cc5596cf26fd76008083a66c9d62387fd4b45881b8460a54c707" 
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.513542 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.532986 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w6fkh" podStartSLOduration=3.095675606 podStartE2EDuration="5.532948458s" podCreationTimestamp="2026-02-18 19:41:34 +0000 UTC" firstStartedPulling="2026-02-18 19:41:36.247576075 +0000 UTC m=+1398.697988871" lastFinishedPulling="2026-02-18 19:41:38.684848927 +0000 UTC m=+1401.135261723" observedRunningTime="2026-02-18 19:41:39.529890534 +0000 UTC m=+1401.980303330" watchObservedRunningTime="2026-02-18 19:41:39.532948458 +0000 UTC m=+1401.983361244"
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.561651 4754 scope.go:117] "RemoveContainer" containerID="04a0522a0720f9a4c20dbc13a1475b7830bd1096227aca8dbebf3526407188de"
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.596025 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-config-data\") pod \"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\" (UID: \"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\") "
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.596072 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-combined-ca-bundle\") pod \"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\" (UID: \"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\") "
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.596276 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-logs\") pod \"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\" (UID: \"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\") "
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.596425 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6467l\" (UniqueName: \"kubernetes.io/projected/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-kube-api-access-6467l\") pod \"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\" (UID: \"b0e57eed-4c9b-41a5-95e9-01b18336b2c0\") "
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.597127 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-logs" (OuterVolumeSpecName: "logs") pod "b0e57eed-4c9b-41a5-95e9-01b18336b2c0" (UID: "b0e57eed-4c9b-41a5-95e9-01b18336b2c0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.597644 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.610210 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-kube-api-access-6467l" (OuterVolumeSpecName: "kube-api-access-6467l") pod "b0e57eed-4c9b-41a5-95e9-01b18336b2c0" (UID: "b0e57eed-4c9b-41a5-95e9-01b18336b2c0"). InnerVolumeSpecName "kube-api-access-6467l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.657883 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-config-data" (OuterVolumeSpecName: "config-data") pod "b0e57eed-4c9b-41a5-95e9-01b18336b2c0" (UID: "b0e57eed-4c9b-41a5-95e9-01b18336b2c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.672621 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0e57eed-4c9b-41a5-95e9-01b18336b2c0" (UID: "b0e57eed-4c9b-41a5-95e9-01b18336b2c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.699778 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.700096 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.700117 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6467l\" (UniqueName: \"kubernetes.io/projected/b0e57eed-4c9b-41a5-95e9-01b18336b2c0-kube-api-access-6467l\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.855209 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.875850 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.895464 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:41:39 crc kubenswrapper[4754]: E0218 19:41:39.896155 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0e57eed-4c9b-41a5-95e9-01b18336b2c0" containerName="nova-api-log"
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.896179 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e57eed-4c9b-41a5-95e9-01b18336b2c0" containerName="nova-api-log"
Feb 18 19:41:39 crc kubenswrapper[4754]: E0218 19:41:39.896205 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0e57eed-4c9b-41a5-95e9-01b18336b2c0" containerName="nova-api-api"
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.896216 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0e57eed-4c9b-41a5-95e9-01b18336b2c0" containerName="nova-api-api"
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.896463 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0e57eed-4c9b-41a5-95e9-01b18336b2c0" containerName="nova-api-api"
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.896490 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0e57eed-4c9b-41a5-95e9-01b18336b2c0" containerName="nova-api-log"
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.897901 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.903901 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.906631 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.907060 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 18 19:41:39 crc kubenswrapper[4754]: I0218 19:41:39.908019 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.007608 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhmp8\" (UniqueName: \"kubernetes.io/projected/34605935-d859-4dd4-a21c-a9922a8099f8-kube-api-access-hhmp8\") pod \"nova-api-0\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " pod="openstack/nova-api-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.007712 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-config-data\") pod \"nova-api-0\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " pod="openstack/nova-api-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.007853 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " pod="openstack/nova-api-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.007888 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34605935-d859-4dd4-a21c-a9922a8099f8-logs\") pod \"nova-api-0\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " pod="openstack/nova-api-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.007927 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " pod="openstack/nova-api-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.007996 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-public-tls-certs\") pod \"nova-api-0\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " pod="openstack/nova-api-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.110376 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34605935-d859-4dd4-a21c-a9922a8099f8-logs\") pod \"nova-api-0\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " pod="openstack/nova-api-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.110442 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " pod="openstack/nova-api-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.110510 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-public-tls-certs\") pod \"nova-api-0\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " pod="openstack/nova-api-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.110568 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhmp8\" (UniqueName: \"kubernetes.io/projected/34605935-d859-4dd4-a21c-a9922a8099f8-kube-api-access-hhmp8\") pod \"nova-api-0\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " pod="openstack/nova-api-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.110623 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-config-data\") pod \"nova-api-0\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " pod="openstack/nova-api-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.110720 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " pod="openstack/nova-api-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.111847 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34605935-d859-4dd4-a21c-a9922a8099f8-logs\") pod \"nova-api-0\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " pod="openstack/nova-api-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.116051 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-public-tls-certs\") pod \"nova-api-0\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " pod="openstack/nova-api-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.117532 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-config-data\") pod \"nova-api-0\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " pod="openstack/nova-api-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.117923 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " pod="openstack/nova-api-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.124553 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-internal-tls-certs\") pod \"nova-api-0\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " pod="openstack/nova-api-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.135115 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhmp8\" (UniqueName: \"kubernetes.io/projected/34605935-d859-4dd4-a21c-a9922a8099f8-kube-api-access-hhmp8\") pod \"nova-api-0\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " pod="openstack/nova-api-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.204469 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.232990 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.235998 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0e57eed-4c9b-41a5-95e9-01b18336b2c0" path="/var/lib/kubelet/pods/b0e57eed-4c9b-41a5-95e9-01b18336b2c0/volumes"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.314601 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-ceilometer-tls-certs\") pod \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") "
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.314994 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a0bd74-4072-4689-bfdc-4ab76f0b9462-run-httpd\") pod \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") "
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.315056 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a0bd74-4072-4689-bfdc-4ab76f0b9462-log-httpd\") pod \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") "
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.315156 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-config-data\") pod \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") "
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.315179 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-sg-core-conf-yaml\") pod \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") "
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.315346 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-combined-ca-bundle\") pod \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") "
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.315401 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk7j4\" (UniqueName: \"kubernetes.io/projected/02a0bd74-4072-4689-bfdc-4ab76f0b9462-kube-api-access-hk7j4\") pod \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") "
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.315424 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-scripts\") pod \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\" (UID: \"02a0bd74-4072-4689-bfdc-4ab76f0b9462\") "
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.315387 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02a0bd74-4072-4689-bfdc-4ab76f0b9462-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "02a0bd74-4072-4689-bfdc-4ab76f0b9462" (UID: "02a0bd74-4072-4689-bfdc-4ab76f0b9462"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.316695 4754 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a0bd74-4072-4689-bfdc-4ab76f0b9462-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.316781 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02a0bd74-4072-4689-bfdc-4ab76f0b9462-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "02a0bd74-4072-4689-bfdc-4ab76f0b9462" (UID: "02a0bd74-4072-4689-bfdc-4ab76f0b9462"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.319870 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-scripts" (OuterVolumeSpecName: "scripts") pod "02a0bd74-4072-4689-bfdc-4ab76f0b9462" (UID: "02a0bd74-4072-4689-bfdc-4ab76f0b9462"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.329702 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02a0bd74-4072-4689-bfdc-4ab76f0b9462-kube-api-access-hk7j4" (OuterVolumeSpecName: "kube-api-access-hk7j4") pod "02a0bd74-4072-4689-bfdc-4ab76f0b9462" (UID: "02a0bd74-4072-4689-bfdc-4ab76f0b9462"). InnerVolumeSpecName "kube-api-access-hk7j4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.383541 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "02a0bd74-4072-4689-bfdc-4ab76f0b9462" (UID: "02a0bd74-4072-4689-bfdc-4ab76f0b9462"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.385065 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "02a0bd74-4072-4689-bfdc-4ab76f0b9462" (UID: "02a0bd74-4072-4689-bfdc-4ab76f0b9462"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.420821 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk7j4\" (UniqueName: \"kubernetes.io/projected/02a0bd74-4072-4689-bfdc-4ab76f0b9462-kube-api-access-hk7j4\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.420857 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.420867 4754 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.420876 4754 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a0bd74-4072-4689-bfdc-4ab76f0b9462-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.420887 4754 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.459457 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-config-data" (OuterVolumeSpecName: "config-data") pod "02a0bd74-4072-4689-bfdc-4ab76f0b9462" (UID: "02a0bd74-4072-4689-bfdc-4ab76f0b9462"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.500789 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02a0bd74-4072-4689-bfdc-4ab76f0b9462" (UID: "02a0bd74-4072-4689-bfdc-4ab76f0b9462"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.523361 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.523412 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02a0bd74-4072-4689-bfdc-4ab76f0b9462-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.529868 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02a0bd74-4072-4689-bfdc-4ab76f0b9462","Type":"ContainerDied","Data":"b45086faae92e8a47a3f2cc726b57af275c6ec358a55d4e566f26e7b3d1fddfc"}
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.529970 4754 scope.go:117] "RemoveContainer" containerID="4f49ce9a0b51f0538ada6341c439e752fb0c206fc0c59e92169b5c14f0c5b95e"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.530195 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.566646 4754 scope.go:117] "RemoveContainer" containerID="4268fe83fc1d6f316b07d5a9a93f6b1a3ed5b0e299ede27385ac233ddf25f51f"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.598318 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.618662 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.629318 4754 scope.go:117] "RemoveContainer" containerID="f22ecaf0fd7cd4d815fe74705bd073b0ca6c1f6fa1d163d88cb5e7daa7f5c90e"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.635658 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:41:40 crc kubenswrapper[4754]: E0218 19:41:40.636636 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerName="ceilometer-central-agent"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.636680 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerName="ceilometer-central-agent"
Feb 18 19:41:40 crc kubenswrapper[4754]: E0218 19:41:40.636695 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerName="ceilometer-notification-agent"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.636701 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerName="ceilometer-notification-agent"
Feb 18 19:41:40 crc kubenswrapper[4754]: E0218 19:41:40.636712 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerName="proxy-httpd"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.636719 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerName="proxy-httpd"
Feb 18 19:41:40 crc kubenswrapper[4754]: E0218 19:41:40.636730 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerName="sg-core"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.636735 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerName="sg-core"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.638711 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerName="ceilometer-central-agent"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.638748 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerName="sg-core"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.638757 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerName="proxy-httpd"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.638765 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" containerName="ceilometer-notification-agent"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.640932 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.644787 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.645616 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.648951 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.649657 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.673338 4754 scope.go:117] "RemoveContainer" containerID="d8a94db86100dad8d17a427b76a53017994bb39eba38cca08874d753f4bbe70c"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.731036 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.731099 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx762\" (UniqueName: \"kubernetes.io/projected/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-kube-api-access-kx762\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.731132 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-log-httpd\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.731252 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.731283 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-run-httpd\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.731303 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-scripts\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.731454 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-config-data\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.731550 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.797019 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:41:40 crc kubenswrapper[4754]: W0218 19:41:40.801188 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34605935_d859_4dd4_a21c_a9922a8099f8.slice/crio-d8aca5df332d928d01a517eb1f65e853a26e744ec91dde7491f9497a4e586f54 WatchSource:0}: Error finding container d8aca5df332d928d01a517eb1f65e853a26e744ec91dde7491f9497a4e586f54: Status 404 returned error can't find the container with id d8aca5df332d928d01a517eb1f65e853a26e744ec91dde7491f9497a4e586f54
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.834192 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.835868 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx762\" (UniqueName: \"kubernetes.io/projected/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-kube-api-access-kx762\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.836065 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-log-httpd\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.836459 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.836602 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-log-httpd\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.836613 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-run-httpd\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.836701 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-scripts\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.836799 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-config-data\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.836850 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.837284 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-run-httpd\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.838269 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.839849 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.841561 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.841771 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-scripts\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.842135 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-config-data\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.856199 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx762\" (UniqueName: \"kubernetes.io/projected/d3b0ecfd-1857-4e00-a48d-98824f7f0c34-kube-api-access-kx762\") pod \"ceilometer-0\" (UID: \"d3b0ecfd-1857-4e00-a48d-98824f7f0c34\") " pod="openstack/ceilometer-0"
Feb 18 19:41:40 crc kubenswrapper[4754]: I0218 19:41:40.961229 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:41:41 crc kubenswrapper[4754]: I0218 19:41:41.511701 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:41:41 crc kubenswrapper[4754]: W0218 19:41:41.553406 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3b0ecfd_1857_4e00_a48d_98824f7f0c34.slice/crio-9e2193018011117fe5df9bb2cc06f7e1742f870770e51ddf0acf12e7161e2775 WatchSource:0}: Error finding container 9e2193018011117fe5df9bb2cc06f7e1742f870770e51ddf0acf12e7161e2775: Status 404 returned error can't find the container with id 9e2193018011117fe5df9bb2cc06f7e1742f870770e51ddf0acf12e7161e2775
Feb 18 19:41:41 crc kubenswrapper[4754]: I0218 19:41:41.558610 4754 generic.go:334] "Generic (PLEG): container finished" podID="57e445d6-0105-4d21-ac8b-812932e998a2" containerID="3b1e4deae6fee1b518fc91547ee21a885ae0ba358392b77b65e54724a087273b" exitCode=0
Feb 18 19:41:41 crc kubenswrapper[4754]: I0218 19:41:41.558687 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mgf5c" event={"ID":"57e445d6-0105-4d21-ac8b-812932e998a2","Type":"ContainerDied","Data":"3b1e4deae6fee1b518fc91547ee21a885ae0ba358392b77b65e54724a087273b"}
Feb 18 19:41:41 crc kubenswrapper[4754]: I0218 19:41:41.562830 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34605935-d859-4dd4-a21c-a9922a8099f8","Type":"ContainerStarted","Data":"15efe080f4fa59ecf7f05ee05216ee67714949f48f9389e008d4babe98c4e3ad"}
Feb 18 19:41:41 crc kubenswrapper[4754]: I0218 19:41:41.562865 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34605935-d859-4dd4-a21c-a9922a8099f8","Type":"ContainerStarted","Data":"50c66d9f6311868da6ff71915b0a4e4e34961562182af975f050599846ddd8ea"}
Feb 18 19:41:41 crc kubenswrapper[4754]: I0218 19:41:41.562875 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34605935-d859-4dd4-a21c-a9922a8099f8","Type":"ContainerStarted","Data":"d8aca5df332d928d01a517eb1f65e853a26e744ec91dde7491f9497a4e586f54"}
Feb 18 19:41:41 crc kubenswrapper[4754]: I0218 19:41:41.603057 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.603034884 podStartE2EDuration="2.603034884s" podCreationTimestamp="2026-02-18 19:41:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:41.597977819 +0000 UTC m=+1404.048390635" watchObservedRunningTime="2026-02-18 19:41:41.603034884 +0000 UTC m=+1404.053447680"
Feb 18 19:41:42 crc kubenswrapper[4754]: I0218 19:41:42.226991 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02a0bd74-4072-4689-bfdc-4ab76f0b9462" path="/var/lib/kubelet/pods/02a0bd74-4072-4689-bfdc-4ab76f0b9462/volumes"
Feb 18 19:41:42 crc kubenswrapper[4754]: I0218 19:41:42.576198 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3b0ecfd-1857-4e00-a48d-98824f7f0c34","Type":"ContainerStarted","Data":"9e2193018011117fe5df9bb2cc06f7e1742f870770e51ddf0acf12e7161e2775"}
Feb 18 19:41:42 crc kubenswrapper[4754]: I0218 19:41:42.826982 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:42 crc kubenswrapper[4754]: I0218 19:41:42.863174 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:42 crc kubenswrapper[4754]: I0218 19:41:42.892377 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:41:42 crc kubenswrapper[4754]: I0218 19:41:42.967624 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-l2m2z"] Feb 18 19:41:42 crc kubenswrapper[4754]: I0218 19:41:42.967976 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" podUID="7cdd5974-a24b-43a9-80d7-ad4c982aacb0" containerName="dnsmasq-dns" containerID="cri-o://71ce765678e395cfe5d0ee0509575d9d70938b963f44ab589e102fba2eb13a0e" gracePeriod=10 Feb 18 19:41:43 crc kubenswrapper[4754]: I0218 19:41:43.586610 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3b0ecfd-1857-4e00-a48d-98824f7f0c34","Type":"ContainerStarted","Data":"cc1aca30b9285c24a921f21815fef43c2e8b51a65a15fe9d192d99c94c45e56d"} Feb 18 19:41:43 crc kubenswrapper[4754]: I0218 19:41:43.586659 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3b0ecfd-1857-4e00-a48d-98824f7f0c34","Type":"ContainerStarted","Data":"17dccb36f0880e6838ac334230e2088c6f8793546019b0b9096ee3c751680e98"} Feb 18 19:41:43 crc kubenswrapper[4754]: I0218 19:41:43.589013 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mgf5c" event={"ID":"57e445d6-0105-4d21-ac8b-812932e998a2","Type":"ContainerStarted","Data":"d36a74bea1a858c6d25f80d05da1139bdb8d119c84ac4a304ef0b5a0563bce6a"} Feb 18 19:41:43 crc kubenswrapper[4754]: I0218 19:41:43.590447 4754 generic.go:334] "Generic (PLEG): container finished" 
podID="7cdd5974-a24b-43a9-80d7-ad4c982aacb0" containerID="71ce765678e395cfe5d0ee0509575d9d70938b963f44ab589e102fba2eb13a0e" exitCode=0 Feb 18 19:41:43 crc kubenswrapper[4754]: I0218 19:41:43.590523 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" event={"ID":"7cdd5974-a24b-43a9-80d7-ad4c982aacb0","Type":"ContainerDied","Data":"71ce765678e395cfe5d0ee0509575d9d70938b963f44ab589e102fba2eb13a0e"} Feb 18 19:41:43 crc kubenswrapper[4754]: I0218 19:41:43.606520 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:41:43 crc kubenswrapper[4754]: I0218 19:41:43.615931 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mgf5c" podStartSLOduration=3.617488833 podStartE2EDuration="6.615904445s" podCreationTimestamp="2026-02-18 19:41:37 +0000 UTC" firstStartedPulling="2026-02-18 19:41:39.468908533 +0000 UTC m=+1401.919321329" lastFinishedPulling="2026-02-18 19:41:42.467324145 +0000 UTC m=+1404.917736941" observedRunningTime="2026-02-18 19:41:43.613749109 +0000 UTC m=+1406.064161915" watchObservedRunningTime="2026-02-18 19:41:43.615904445 +0000 UTC m=+1406.066317241" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.001719 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-x4p8j"] Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.004023 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x4p8j" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.007945 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.007945 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.024414 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-x4p8j"] Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.091876 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.148932 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374cff81-352f-49da-b29e-db1c118cfd37-config-data\") pod \"nova-cell1-cell-mapping-x4p8j\" (UID: \"374cff81-352f-49da-b29e-db1c118cfd37\") " pod="openstack/nova-cell1-cell-mapping-x4p8j" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.149002 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374cff81-352f-49da-b29e-db1c118cfd37-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-x4p8j\" (UID: \"374cff81-352f-49da-b29e-db1c118cfd37\") " pod="openstack/nova-cell1-cell-mapping-x4p8j" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.149031 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhks2\" (UniqueName: \"kubernetes.io/projected/374cff81-352f-49da-b29e-db1c118cfd37-kube-api-access-nhks2\") pod \"nova-cell1-cell-mapping-x4p8j\" (UID: \"374cff81-352f-49da-b29e-db1c118cfd37\") " 
pod="openstack/nova-cell1-cell-mapping-x4p8j" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.149060 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/374cff81-352f-49da-b29e-db1c118cfd37-scripts\") pod \"nova-cell1-cell-mapping-x4p8j\" (UID: \"374cff81-352f-49da-b29e-db1c118cfd37\") " pod="openstack/nova-cell1-cell-mapping-x4p8j" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.250853 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-ovsdbserver-sb\") pod \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.250981 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-dns-svc\") pod \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.251090 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-config\") pod \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.251594 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-dns-swift-storage-0\") pod \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.251655 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-69lj4\" (UniqueName: \"kubernetes.io/projected/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-kube-api-access-69lj4\") pod \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.251701 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-ovsdbserver-nb\") pod \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\" (UID: \"7cdd5974-a24b-43a9-80d7-ad4c982aacb0\") " Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.252124 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374cff81-352f-49da-b29e-db1c118cfd37-config-data\") pod \"nova-cell1-cell-mapping-x4p8j\" (UID: \"374cff81-352f-49da-b29e-db1c118cfd37\") " pod="openstack/nova-cell1-cell-mapping-x4p8j" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.252256 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374cff81-352f-49da-b29e-db1c118cfd37-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-x4p8j\" (UID: \"374cff81-352f-49da-b29e-db1c118cfd37\") " pod="openstack/nova-cell1-cell-mapping-x4p8j" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.252431 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhks2\" (UniqueName: \"kubernetes.io/projected/374cff81-352f-49da-b29e-db1c118cfd37-kube-api-access-nhks2\") pod \"nova-cell1-cell-mapping-x4p8j\" (UID: \"374cff81-352f-49da-b29e-db1c118cfd37\") " pod="openstack/nova-cell1-cell-mapping-x4p8j" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.252801 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/374cff81-352f-49da-b29e-db1c118cfd37-scripts\") 
pod \"nova-cell1-cell-mapping-x4p8j\" (UID: \"374cff81-352f-49da-b29e-db1c118cfd37\") " pod="openstack/nova-cell1-cell-mapping-x4p8j" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.255954 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-kube-api-access-69lj4" (OuterVolumeSpecName: "kube-api-access-69lj4") pod "7cdd5974-a24b-43a9-80d7-ad4c982aacb0" (UID: "7cdd5974-a24b-43a9-80d7-ad4c982aacb0"). InnerVolumeSpecName "kube-api-access-69lj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.256434 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374cff81-352f-49da-b29e-db1c118cfd37-config-data\") pod \"nova-cell1-cell-mapping-x4p8j\" (UID: \"374cff81-352f-49da-b29e-db1c118cfd37\") " pod="openstack/nova-cell1-cell-mapping-x4p8j" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.257936 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374cff81-352f-49da-b29e-db1c118cfd37-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-x4p8j\" (UID: \"374cff81-352f-49da-b29e-db1c118cfd37\") " pod="openstack/nova-cell1-cell-mapping-x4p8j" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.267035 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/374cff81-352f-49da-b29e-db1c118cfd37-scripts\") pod \"nova-cell1-cell-mapping-x4p8j\" (UID: \"374cff81-352f-49da-b29e-db1c118cfd37\") " pod="openstack/nova-cell1-cell-mapping-x4p8j" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.269947 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhks2\" (UniqueName: \"kubernetes.io/projected/374cff81-352f-49da-b29e-db1c118cfd37-kube-api-access-nhks2\") pod 
\"nova-cell1-cell-mapping-x4p8j\" (UID: \"374cff81-352f-49da-b29e-db1c118cfd37\") " pod="openstack/nova-cell1-cell-mapping-x4p8j" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.307456 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7cdd5974-a24b-43a9-80d7-ad4c982aacb0" (UID: "7cdd5974-a24b-43a9-80d7-ad4c982aacb0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.314183 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-config" (OuterVolumeSpecName: "config") pod "7cdd5974-a24b-43a9-80d7-ad4c982aacb0" (UID: "7cdd5974-a24b-43a9-80d7-ad4c982aacb0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.319414 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7cdd5974-a24b-43a9-80d7-ad4c982aacb0" (UID: "7cdd5974-a24b-43a9-80d7-ad4c982aacb0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.321852 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x4p8j" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.325619 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7cdd5974-a24b-43a9-80d7-ad4c982aacb0" (UID: "7cdd5974-a24b-43a9-80d7-ad4c982aacb0"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.341517 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7cdd5974-a24b-43a9-80d7-ad4c982aacb0" (UID: "7cdd5974-a24b-43a9-80d7-ad4c982aacb0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.354871 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.354897 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.354908 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69lj4\" (UniqueName: \"kubernetes.io/projected/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-kube-api-access-69lj4\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.354916 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.354925 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.354934 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/7cdd5974-a24b-43a9-80d7-ad4c982aacb0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.606966 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3b0ecfd-1857-4e00-a48d-98824f7f0c34","Type":"ContainerStarted","Data":"f679c1301845e90dc9b2d7f869092a139ba824bf8430479be9cfa549bfd67743"} Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.611332 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.614680 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-l2m2z" event={"ID":"7cdd5974-a24b-43a9-80d7-ad4c982aacb0","Type":"ContainerDied","Data":"6b9760c7e8a1d8ad8b755d601e9041d6c7262fff9a5efde3b8534343b42e9f16"} Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.614731 4754 scope.go:117] "RemoveContainer" containerID="71ce765678e395cfe5d0ee0509575d9d70938b963f44ab589e102fba2eb13a0e" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.660534 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-l2m2z"] Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.664908 4754 scope.go:117] "RemoveContainer" containerID="c630ba63f36df94031a07d3b3d409002b844b4049ba59b1e225535f7ca652771" Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.681231 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-l2m2z"] Feb 18 19:41:44 crc kubenswrapper[4754]: I0218 19:41:44.902898 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-x4p8j"] Feb 18 19:41:44 crc kubenswrapper[4754]: W0218 19:41:44.905739 4754 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod374cff81_352f_49da_b29e_db1c118cfd37.slice/crio-91aafa46a3859ec844b24a8d2c1d17445a501ba1fee7f776a7e9c303940d1910 WatchSource:0}: Error finding container 91aafa46a3859ec844b24a8d2c1d17445a501ba1fee7f776a7e9c303940d1910: Status 404 returned error can't find the container with id 91aafa46a3859ec844b24a8d2c1d17445a501ba1fee7f776a7e9c303940d1910 Feb 18 19:41:45 crc kubenswrapper[4754]: I0218 19:41:45.158528 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w6fkh" Feb 18 19:41:45 crc kubenswrapper[4754]: I0218 19:41:45.158804 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w6fkh" Feb 18 19:41:45 crc kubenswrapper[4754]: I0218 19:41:45.236854 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w6fkh" Feb 18 19:41:45 crc kubenswrapper[4754]: I0218 19:41:45.625776 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-x4p8j" event={"ID":"374cff81-352f-49da-b29e-db1c118cfd37","Type":"ContainerStarted","Data":"d48f6023a01e527e47d74bf99a9f03ad6d217b681d56a24df745f363f38fe2e4"} Feb 18 19:41:45 crc kubenswrapper[4754]: I0218 19:41:45.626089 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-x4p8j" event={"ID":"374cff81-352f-49da-b29e-db1c118cfd37","Type":"ContainerStarted","Data":"91aafa46a3859ec844b24a8d2c1d17445a501ba1fee7f776a7e9c303940d1910"} Feb 18 19:41:45 crc kubenswrapper[4754]: I0218 19:41:45.650079 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-x4p8j" podStartSLOduration=2.650050422 podStartE2EDuration="2.650050422s" podCreationTimestamp="2026-02-18 19:41:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:45.646482531 +0000 UTC m=+1408.096895327" watchObservedRunningTime="2026-02-18 19:41:45.650050422 +0000 UTC m=+1408.100463218" Feb 18 19:41:45 crc kubenswrapper[4754]: I0218 19:41:45.691057 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w6fkh" Feb 18 19:41:46 crc kubenswrapper[4754]: I0218 19:41:46.225049 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cdd5974-a24b-43a9-80d7-ad4c982aacb0" path="/var/lib/kubelet/pods/7cdd5974-a24b-43a9-80d7-ad4c982aacb0/volumes" Feb 18 19:41:46 crc kubenswrapper[4754]: I0218 19:41:46.701284 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d3b0ecfd-1857-4e00-a48d-98824f7f0c34","Type":"ContainerStarted","Data":"861ff574e90e499acde300e397bf5a7b4c0c70e5bd7b5898442ef087001fc84d"} Feb 18 19:41:46 crc kubenswrapper[4754]: I0218 19:41:46.701792 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:41:46 crc kubenswrapper[4754]: I0218 19:41:46.732612 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.383271293 podStartE2EDuration="6.732584035s" podCreationTimestamp="2026-02-18 19:41:40 +0000 UTC" firstStartedPulling="2026-02-18 19:41:41.555193339 +0000 UTC m=+1404.005606135" lastFinishedPulling="2026-02-18 19:41:45.904506081 +0000 UTC m=+1408.354918877" observedRunningTime="2026-02-18 19:41:46.722740621 +0000 UTC m=+1409.173153417" watchObservedRunningTime="2026-02-18 19:41:46.732584035 +0000 UTC m=+1409.182996831" Feb 18 19:41:48 crc kubenswrapper[4754]: I0218 19:41:48.152057 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mgf5c" Feb 18 19:41:48 crc kubenswrapper[4754]: I0218 19:41:48.152405 4754 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mgf5c" Feb 18 19:41:49 crc kubenswrapper[4754]: I0218 19:41:49.218737 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mgf5c" podUID="57e445d6-0105-4d21-ac8b-812932e998a2" containerName="registry-server" probeResult="failure" output=< Feb 18 19:41:49 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Feb 18 19:41:49 crc kubenswrapper[4754]: > Feb 18 19:41:49 crc kubenswrapper[4754]: I0218 19:41:49.423173 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w6fkh"] Feb 18 19:41:49 crc kubenswrapper[4754]: I0218 19:41:49.423829 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w6fkh" podUID="0744bacb-6f20-4de8-a509-cacac3ce4baf" containerName="registry-server" containerID="cri-o://f77ab668594608361308127cc5a37861bafc8f4b9ec23e747c9ad79562532e11" gracePeriod=2 Feb 18 19:41:49 crc kubenswrapper[4754]: I0218 19:41:49.733299 4754 generic.go:334] "Generic (PLEG): container finished" podID="0744bacb-6f20-4de8-a509-cacac3ce4baf" containerID="f77ab668594608361308127cc5a37861bafc8f4b9ec23e747c9ad79562532e11" exitCode=0 Feb 18 19:41:49 crc kubenswrapper[4754]: I0218 19:41:49.733351 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6fkh" event={"ID":"0744bacb-6f20-4de8-a509-cacac3ce4baf","Type":"ContainerDied","Data":"f77ab668594608361308127cc5a37861bafc8f4b9ec23e747c9ad79562532e11"} Feb 18 19:41:49 crc kubenswrapper[4754]: I0218 19:41:49.948645 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w6fkh" Feb 18 19:41:50 crc kubenswrapper[4754]: I0218 19:41:50.088420 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5gt5\" (UniqueName: \"kubernetes.io/projected/0744bacb-6f20-4de8-a509-cacac3ce4baf-kube-api-access-t5gt5\") pod \"0744bacb-6f20-4de8-a509-cacac3ce4baf\" (UID: \"0744bacb-6f20-4de8-a509-cacac3ce4baf\") " Feb 18 19:41:50 crc kubenswrapper[4754]: I0218 19:41:50.088535 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0744bacb-6f20-4de8-a509-cacac3ce4baf-utilities\") pod \"0744bacb-6f20-4de8-a509-cacac3ce4baf\" (UID: \"0744bacb-6f20-4de8-a509-cacac3ce4baf\") " Feb 18 19:41:50 crc kubenswrapper[4754]: I0218 19:41:50.088694 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0744bacb-6f20-4de8-a509-cacac3ce4baf-catalog-content\") pod \"0744bacb-6f20-4de8-a509-cacac3ce4baf\" (UID: \"0744bacb-6f20-4de8-a509-cacac3ce4baf\") " Feb 18 19:41:50 crc kubenswrapper[4754]: I0218 19:41:50.089339 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0744bacb-6f20-4de8-a509-cacac3ce4baf-utilities" (OuterVolumeSpecName: "utilities") pod "0744bacb-6f20-4de8-a509-cacac3ce4baf" (UID: "0744bacb-6f20-4de8-a509-cacac3ce4baf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:41:50 crc kubenswrapper[4754]: I0218 19:41:50.101347 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0744bacb-6f20-4de8-a509-cacac3ce4baf-kube-api-access-t5gt5" (OuterVolumeSpecName: "kube-api-access-t5gt5") pod "0744bacb-6f20-4de8-a509-cacac3ce4baf" (UID: "0744bacb-6f20-4de8-a509-cacac3ce4baf"). InnerVolumeSpecName "kube-api-access-t5gt5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:41:50 crc kubenswrapper[4754]: I0218 19:41:50.140605 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0744bacb-6f20-4de8-a509-cacac3ce4baf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0744bacb-6f20-4de8-a509-cacac3ce4baf" (UID: "0744bacb-6f20-4de8-a509-cacac3ce4baf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:41:50 crc kubenswrapper[4754]: I0218 19:41:50.191988 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5gt5\" (UniqueName: \"kubernetes.io/projected/0744bacb-6f20-4de8-a509-cacac3ce4baf-kube-api-access-t5gt5\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:50 crc kubenswrapper[4754]: I0218 19:41:50.192026 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0744bacb-6f20-4de8-a509-cacac3ce4baf-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:50 crc kubenswrapper[4754]: I0218 19:41:50.192041 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0744bacb-6f20-4de8-a509-cacac3ce4baf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:50 crc kubenswrapper[4754]: I0218 19:41:50.234058 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:41:50 crc kubenswrapper[4754]: I0218 19:41:50.237196 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:41:50 crc kubenswrapper[4754]: I0218 19:41:50.759903 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6fkh" event={"ID":"0744bacb-6f20-4de8-a509-cacac3ce4baf","Type":"ContainerDied","Data":"e630b7fdff547ce1150db2d8ad00dec30da55d527e49a52583622c1d7d7ff9c0"} Feb 18 19:41:50 crc 
kubenswrapper[4754]: I0218 19:41:50.759949 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w6fkh" Feb 18 19:41:50 crc kubenswrapper[4754]: I0218 19:41:50.760292 4754 scope.go:117] "RemoveContainer" containerID="f77ab668594608361308127cc5a37861bafc8f4b9ec23e747c9ad79562532e11" Feb 18 19:41:50 crc kubenswrapper[4754]: I0218 19:41:50.763984 4754 generic.go:334] "Generic (PLEG): container finished" podID="374cff81-352f-49da-b29e-db1c118cfd37" containerID="d48f6023a01e527e47d74bf99a9f03ad6d217b681d56a24df745f363f38fe2e4" exitCode=0 Feb 18 19:41:50 crc kubenswrapper[4754]: I0218 19:41:50.764086 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-x4p8j" event={"ID":"374cff81-352f-49da-b29e-db1c118cfd37","Type":"ContainerDied","Data":"d48f6023a01e527e47d74bf99a9f03ad6d217b681d56a24df745f363f38fe2e4"} Feb 18 19:41:50 crc kubenswrapper[4754]: I0218 19:41:50.810672 4754 scope.go:117] "RemoveContainer" containerID="95d7cd47287076c428c79eb57cb658fa681da7516a782d80fb91fb6fd95d9c58" Feb 18 19:41:50 crc kubenswrapper[4754]: I0218 19:41:50.811845 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w6fkh"] Feb 18 19:41:50 crc kubenswrapper[4754]: I0218 19:41:50.824434 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w6fkh"] Feb 18 19:41:50 crc kubenswrapper[4754]: I0218 19:41:50.839329 4754 scope.go:117] "RemoveContainer" containerID="f065f2b71db0a065d6688ceb25a73250844d058c8ff13619c8d1b160149f5d68" Feb 18 19:41:51 crc kubenswrapper[4754]: I0218 19:41:51.251326 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="34605935-d859-4dd4-a21c-a9922a8099f8" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.223:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 
18 19:41:51 crc kubenswrapper[4754]: I0218 19:41:51.251378 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="34605935-d859-4dd4-a21c-a9922a8099f8" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.223:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:41:52 crc kubenswrapper[4754]: I0218 19:41:52.196524 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x4p8j" Feb 18 19:41:52 crc kubenswrapper[4754]: I0218 19:41:52.248716 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0744bacb-6f20-4de8-a509-cacac3ce4baf" path="/var/lib/kubelet/pods/0744bacb-6f20-4de8-a509-cacac3ce4baf/volumes" Feb 18 19:41:52 crc kubenswrapper[4754]: I0218 19:41:52.338887 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/374cff81-352f-49da-b29e-db1c118cfd37-scripts\") pod \"374cff81-352f-49da-b29e-db1c118cfd37\" (UID: \"374cff81-352f-49da-b29e-db1c118cfd37\") " Feb 18 19:41:52 crc kubenswrapper[4754]: I0218 19:41:52.340205 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhks2\" (UniqueName: \"kubernetes.io/projected/374cff81-352f-49da-b29e-db1c118cfd37-kube-api-access-nhks2\") pod \"374cff81-352f-49da-b29e-db1c118cfd37\" (UID: \"374cff81-352f-49da-b29e-db1c118cfd37\") " Feb 18 19:41:52 crc kubenswrapper[4754]: I0218 19:41:52.340346 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374cff81-352f-49da-b29e-db1c118cfd37-config-data\") pod \"374cff81-352f-49da-b29e-db1c118cfd37\" (UID: \"374cff81-352f-49da-b29e-db1c118cfd37\") " Feb 18 19:41:52 crc kubenswrapper[4754]: I0218 19:41:52.340685 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374cff81-352f-49da-b29e-db1c118cfd37-combined-ca-bundle\") pod \"374cff81-352f-49da-b29e-db1c118cfd37\" (UID: \"374cff81-352f-49da-b29e-db1c118cfd37\") " Feb 18 19:41:52 crc kubenswrapper[4754]: I0218 19:41:52.344887 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374cff81-352f-49da-b29e-db1c118cfd37-scripts" (OuterVolumeSpecName: "scripts") pod "374cff81-352f-49da-b29e-db1c118cfd37" (UID: "374cff81-352f-49da-b29e-db1c118cfd37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:52 crc kubenswrapper[4754]: I0218 19:41:52.345657 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/374cff81-352f-49da-b29e-db1c118cfd37-kube-api-access-nhks2" (OuterVolumeSpecName: "kube-api-access-nhks2") pod "374cff81-352f-49da-b29e-db1c118cfd37" (UID: "374cff81-352f-49da-b29e-db1c118cfd37"). InnerVolumeSpecName "kube-api-access-nhks2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:41:52 crc kubenswrapper[4754]: I0218 19:41:52.370231 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374cff81-352f-49da-b29e-db1c118cfd37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "374cff81-352f-49da-b29e-db1c118cfd37" (UID: "374cff81-352f-49da-b29e-db1c118cfd37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:52 crc kubenswrapper[4754]: I0218 19:41:52.375845 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374cff81-352f-49da-b29e-db1c118cfd37-config-data" (OuterVolumeSpecName: "config-data") pod "374cff81-352f-49da-b29e-db1c118cfd37" (UID: "374cff81-352f-49da-b29e-db1c118cfd37"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:52 crc kubenswrapper[4754]: I0218 19:41:52.443022 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374cff81-352f-49da-b29e-db1c118cfd37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:52 crc kubenswrapper[4754]: I0218 19:41:52.443055 4754 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/374cff81-352f-49da-b29e-db1c118cfd37-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:52 crc kubenswrapper[4754]: I0218 19:41:52.443098 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhks2\" (UniqueName: \"kubernetes.io/projected/374cff81-352f-49da-b29e-db1c118cfd37-kube-api-access-nhks2\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:52 crc kubenswrapper[4754]: I0218 19:41:52.443109 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374cff81-352f-49da-b29e-db1c118cfd37-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:52 crc kubenswrapper[4754]: I0218 19:41:52.805179 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-x4p8j" event={"ID":"374cff81-352f-49da-b29e-db1c118cfd37","Type":"ContainerDied","Data":"91aafa46a3859ec844b24a8d2c1d17445a501ba1fee7f776a7e9c303940d1910"} Feb 18 19:41:52 crc kubenswrapper[4754]: I0218 19:41:52.805238 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91aafa46a3859ec844b24a8d2c1d17445a501ba1fee7f776a7e9c303940d1910" Feb 18 19:41:52 crc kubenswrapper[4754]: I0218 19:41:52.805241 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x4p8j" Feb 18 19:41:53 crc kubenswrapper[4754]: I0218 19:41:53.011899 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:41:53 crc kubenswrapper[4754]: I0218 19:41:53.012239 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="34605935-d859-4dd4-a21c-a9922a8099f8" containerName="nova-api-log" containerID="cri-o://50c66d9f6311868da6ff71915b0a4e4e34961562182af975f050599846ddd8ea" gracePeriod=30 Feb 18 19:41:53 crc kubenswrapper[4754]: I0218 19:41:53.012403 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="34605935-d859-4dd4-a21c-a9922a8099f8" containerName="nova-api-api" containerID="cri-o://15efe080f4fa59ecf7f05ee05216ee67714949f48f9389e008d4babe98c4e3ad" gracePeriod=30 Feb 18 19:41:53 crc kubenswrapper[4754]: I0218 19:41:53.034984 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:41:53 crc kubenswrapper[4754]: I0218 19:41:53.035230 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="40656c76-f405-4d8a-8b29-384ccae5068b" containerName="nova-scheduler-scheduler" containerID="cri-o://4aabcb2e001cced419478763a85e85aca5ae6e8aa0b7fb0301fa93993e1bc783" gracePeriod=30 Feb 18 19:41:53 crc kubenswrapper[4754]: I0218 19:41:53.045137 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:41:53 crc kubenswrapper[4754]: I0218 19:41:53.045436 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="254a62e3-ae00-4da7-8b56-8ed9f6580e17" containerName="nova-metadata-log" containerID="cri-o://9b46709400a1e4360c4212962ed64141c7b25893a064c40fc9374249088977c5" gracePeriod=30 Feb 18 19:41:53 crc kubenswrapper[4754]: I0218 19:41:53.045939 4754 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="254a62e3-ae00-4da7-8b56-8ed9f6580e17" containerName="nova-metadata-metadata" containerID="cri-o://4e69b7f8cecc43ad2c87272ea87230a38a8059d14a23aabe01be60297d1306f1" gracePeriod=30 Feb 18 19:41:53 crc kubenswrapper[4754]: I0218 19:41:53.817891 4754 generic.go:334] "Generic (PLEG): container finished" podID="254a62e3-ae00-4da7-8b56-8ed9f6580e17" containerID="9b46709400a1e4360c4212962ed64141c7b25893a064c40fc9374249088977c5" exitCode=143 Feb 18 19:41:53 crc kubenswrapper[4754]: I0218 19:41:53.817963 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"254a62e3-ae00-4da7-8b56-8ed9f6580e17","Type":"ContainerDied","Data":"9b46709400a1e4360c4212962ed64141c7b25893a064c40fc9374249088977c5"} Feb 18 19:41:53 crc kubenswrapper[4754]: I0218 19:41:53.821395 4754 generic.go:334] "Generic (PLEG): container finished" podID="34605935-d859-4dd4-a21c-a9922a8099f8" containerID="50c66d9f6311868da6ff71915b0a4e4e34961562182af975f050599846ddd8ea" exitCode=143 Feb 18 19:41:53 crc kubenswrapper[4754]: I0218 19:41:53.821431 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34605935-d859-4dd4-a21c-a9922a8099f8","Type":"ContainerDied","Data":"50c66d9f6311868da6ff71915b0a4e4e34961562182af975f050599846ddd8ea"} Feb 18 19:41:54 crc kubenswrapper[4754]: E0218 19:41:54.301831 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4aabcb2e001cced419478763a85e85aca5ae6e8aa0b7fb0301fa93993e1bc783" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:41:54 crc kubenswrapper[4754]: E0218 19:41:54.303810 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="4aabcb2e001cced419478763a85e85aca5ae6e8aa0b7fb0301fa93993e1bc783" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:41:54 crc kubenswrapper[4754]: E0218 19:41:54.305776 4754 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4aabcb2e001cced419478763a85e85aca5ae6e8aa0b7fb0301fa93993e1bc783" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:41:54 crc kubenswrapper[4754]: E0218 19:41:54.305812 4754 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="40656c76-f405-4d8a-8b29-384ccae5068b" containerName="nova-scheduler-scheduler" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.201856 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="254a62e3-ae00-4da7-8b56-8ed9f6580e17" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": read tcp 10.217.0.2:38794->10.217.0.216:8775: read: connection reset by peer" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.202005 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="254a62e3-ae00-4da7-8b56-8ed9f6580e17" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": read tcp 10.217.0.2:38796->10.217.0.216:8775: read: connection reset by peer" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.672191 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.736609 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bszvl\" (UniqueName: \"kubernetes.io/projected/254a62e3-ae00-4da7-8b56-8ed9f6580e17-kube-api-access-bszvl\") pod \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\" (UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.736693 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/254a62e3-ae00-4da7-8b56-8ed9f6580e17-nova-metadata-tls-certs\") pod \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\" (UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.736852 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/254a62e3-ae00-4da7-8b56-8ed9f6580e17-logs\") pod \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\" (UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.736919 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/254a62e3-ae00-4da7-8b56-8ed9f6580e17-config-data\") pod \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\" (UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.736966 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254a62e3-ae00-4da7-8b56-8ed9f6580e17-combined-ca-bundle\") pod \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\" (UID: \"254a62e3-ae00-4da7-8b56-8ed9f6580e17\") " Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.738514 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/254a62e3-ae00-4da7-8b56-8ed9f6580e17-logs" (OuterVolumeSpecName: "logs") pod "254a62e3-ae00-4da7-8b56-8ed9f6580e17" (UID: "254a62e3-ae00-4da7-8b56-8ed9f6580e17"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.747550 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/254a62e3-ae00-4da7-8b56-8ed9f6580e17-kube-api-access-bszvl" (OuterVolumeSpecName: "kube-api-access-bszvl") pod "254a62e3-ae00-4da7-8b56-8ed9f6580e17" (UID: "254a62e3-ae00-4da7-8b56-8ed9f6580e17"). InnerVolumeSpecName "kube-api-access-bszvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.775441 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254a62e3-ae00-4da7-8b56-8ed9f6580e17-config-data" (OuterVolumeSpecName: "config-data") pod "254a62e3-ae00-4da7-8b56-8ed9f6580e17" (UID: "254a62e3-ae00-4da7-8b56-8ed9f6580e17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.796600 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254a62e3-ae00-4da7-8b56-8ed9f6580e17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "254a62e3-ae00-4da7-8b56-8ed9f6580e17" (UID: "254a62e3-ae00-4da7-8b56-8ed9f6580e17"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.824434 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.834356 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254a62e3-ae00-4da7-8b56-8ed9f6580e17-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "254a62e3-ae00-4da7-8b56-8ed9f6580e17" (UID: "254a62e3-ae00-4da7-8b56-8ed9f6580e17"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.839093 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-internal-tls-certs\") pod \"34605935-d859-4dd4-a21c-a9922a8099f8\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.839210 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhmp8\" (UniqueName: \"kubernetes.io/projected/34605935-d859-4dd4-a21c-a9922a8099f8-kube-api-access-hhmp8\") pod \"34605935-d859-4dd4-a21c-a9922a8099f8\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.839250 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-public-tls-certs\") pod \"34605935-d859-4dd4-a21c-a9922a8099f8\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.839284 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34605935-d859-4dd4-a21c-a9922a8099f8-logs\") pod \"34605935-d859-4dd4-a21c-a9922a8099f8\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.839338 4754 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-config-data\") pod \"34605935-d859-4dd4-a21c-a9922a8099f8\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.839356 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-combined-ca-bundle\") pod \"34605935-d859-4dd4-a21c-a9922a8099f8\" (UID: \"34605935-d859-4dd4-a21c-a9922a8099f8\") " Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.839795 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/254a62e3-ae00-4da7-8b56-8ed9f6580e17-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.839809 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/254a62e3-ae00-4da7-8b56-8ed9f6580e17-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.839817 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/254a62e3-ae00-4da7-8b56-8ed9f6580e17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.839829 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bszvl\" (UniqueName: \"kubernetes.io/projected/254a62e3-ae00-4da7-8b56-8ed9f6580e17-kube-api-access-bszvl\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.839838 4754 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/254a62e3-ae00-4da7-8b56-8ed9f6580e17-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:56 
crc kubenswrapper[4754]: I0218 19:41:56.842130 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34605935-d859-4dd4-a21c-a9922a8099f8-logs" (OuterVolumeSpecName: "logs") pod "34605935-d859-4dd4-a21c-a9922a8099f8" (UID: "34605935-d859-4dd4-a21c-a9922a8099f8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.848808 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34605935-d859-4dd4-a21c-a9922a8099f8-kube-api-access-hhmp8" (OuterVolumeSpecName: "kube-api-access-hhmp8") pod "34605935-d859-4dd4-a21c-a9922a8099f8" (UID: "34605935-d859-4dd4-a21c-a9922a8099f8"). InnerVolumeSpecName "kube-api-access-hhmp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.883377 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34605935-d859-4dd4-a21c-a9922a8099f8" (UID: "34605935-d859-4dd4-a21c-a9922a8099f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.889365 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-config-data" (OuterVolumeSpecName: "config-data") pod "34605935-d859-4dd4-a21c-a9922a8099f8" (UID: "34605935-d859-4dd4-a21c-a9922a8099f8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.889554 4754 generic.go:334] "Generic (PLEG): container finished" podID="34605935-d859-4dd4-a21c-a9922a8099f8" containerID="15efe080f4fa59ecf7f05ee05216ee67714949f48f9389e008d4babe98c4e3ad" exitCode=0 Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.889627 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34605935-d859-4dd4-a21c-a9922a8099f8","Type":"ContainerDied","Data":"15efe080f4fa59ecf7f05ee05216ee67714949f48f9389e008d4babe98c4e3ad"} Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.889662 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"34605935-d859-4dd4-a21c-a9922a8099f8","Type":"ContainerDied","Data":"d8aca5df332d928d01a517eb1f65e853a26e744ec91dde7491f9497a4e586f54"} Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.889683 4754 scope.go:117] "RemoveContainer" containerID="15efe080f4fa59ecf7f05ee05216ee67714949f48f9389e008d4babe98c4e3ad" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.889788 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.892689 4754 generic.go:334] "Generic (PLEG): container finished" podID="254a62e3-ae00-4da7-8b56-8ed9f6580e17" containerID="4e69b7f8cecc43ad2c87272ea87230a38a8059d14a23aabe01be60297d1306f1" exitCode=0 Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.892722 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"254a62e3-ae00-4da7-8b56-8ed9f6580e17","Type":"ContainerDied","Data":"4e69b7f8cecc43ad2c87272ea87230a38a8059d14a23aabe01be60297d1306f1"} Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.892742 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"254a62e3-ae00-4da7-8b56-8ed9f6580e17","Type":"ContainerDied","Data":"2fb8c9a74a5dcbca898f541fb4839003aff3c6bb1aafc94987ee7c64d75b7a44"} Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.892798 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.912655 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "34605935-d859-4dd4-a21c-a9922a8099f8" (UID: "34605935-d859-4dd4-a21c-a9922a8099f8"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.927770 4754 scope.go:117] "RemoveContainer" containerID="50c66d9f6311868da6ff71915b0a4e4e34961562182af975f050599846ddd8ea" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.929053 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "34605935-d859-4dd4-a21c-a9922a8099f8" (UID: "34605935-d859-4dd4-a21c-a9922a8099f8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.931199 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.942420 4754 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.942467 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhmp8\" (UniqueName: \"kubernetes.io/projected/34605935-d859-4dd4-a21c-a9922a8099f8-kube-api-access-hhmp8\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.942480 4754 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.942489 4754 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34605935-d859-4dd4-a21c-a9922a8099f8-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.942500 4754 reconciler_common.go:293] 
"Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.942508 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34605935-d859-4dd4-a21c-a9922a8099f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.943911 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.962817 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:41:56 crc kubenswrapper[4754]: E0218 19:41:56.963348 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="254a62e3-ae00-4da7-8b56-8ed9f6580e17" containerName="nova-metadata-log" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.963367 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="254a62e3-ae00-4da7-8b56-8ed9f6580e17" containerName="nova-metadata-log" Feb 18 19:41:56 crc kubenswrapper[4754]: E0218 19:41:56.963390 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0744bacb-6f20-4de8-a509-cacac3ce4baf" containerName="extract-content" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.963398 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="0744bacb-6f20-4de8-a509-cacac3ce4baf" containerName="extract-content" Feb 18 19:41:56 crc kubenswrapper[4754]: E0218 19:41:56.963417 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="254a62e3-ae00-4da7-8b56-8ed9f6580e17" containerName="nova-metadata-metadata" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.963425 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="254a62e3-ae00-4da7-8b56-8ed9f6580e17" containerName="nova-metadata-metadata" Feb 18 19:41:56 crc kubenswrapper[4754]: E0218 
19:41:56.963440 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="374cff81-352f-49da-b29e-db1c118cfd37" containerName="nova-manage" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.963447 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="374cff81-352f-49da-b29e-db1c118cfd37" containerName="nova-manage" Feb 18 19:41:56 crc kubenswrapper[4754]: E0218 19:41:56.963461 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cdd5974-a24b-43a9-80d7-ad4c982aacb0" containerName="init" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.963468 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cdd5974-a24b-43a9-80d7-ad4c982aacb0" containerName="init" Feb 18 19:41:56 crc kubenswrapper[4754]: E0218 19:41:56.963485 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cdd5974-a24b-43a9-80d7-ad4c982aacb0" containerName="dnsmasq-dns" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.963490 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cdd5974-a24b-43a9-80d7-ad4c982aacb0" containerName="dnsmasq-dns" Feb 18 19:41:56 crc kubenswrapper[4754]: E0218 19:41:56.963498 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34605935-d859-4dd4-a21c-a9922a8099f8" containerName="nova-api-log" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.963504 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="34605935-d859-4dd4-a21c-a9922a8099f8" containerName="nova-api-log" Feb 18 19:41:56 crc kubenswrapper[4754]: E0218 19:41:56.963527 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0744bacb-6f20-4de8-a509-cacac3ce4baf" containerName="extract-utilities" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.963533 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="0744bacb-6f20-4de8-a509-cacac3ce4baf" containerName="extract-utilities" Feb 18 19:41:56 crc kubenswrapper[4754]: E0218 19:41:56.963547 4754 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="34605935-d859-4dd4-a21c-a9922a8099f8" containerName="nova-api-api" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.963552 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="34605935-d859-4dd4-a21c-a9922a8099f8" containerName="nova-api-api" Feb 18 19:41:56 crc kubenswrapper[4754]: E0218 19:41:56.963565 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0744bacb-6f20-4de8-a509-cacac3ce4baf" containerName="registry-server" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.963571 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="0744bacb-6f20-4de8-a509-cacac3ce4baf" containerName="registry-server" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.963828 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="254a62e3-ae00-4da7-8b56-8ed9f6580e17" containerName="nova-metadata-metadata" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.963854 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cdd5974-a24b-43a9-80d7-ad4c982aacb0" containerName="dnsmasq-dns" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.963863 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="34605935-d859-4dd4-a21c-a9922a8099f8" containerName="nova-api-log" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.963875 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="254a62e3-ae00-4da7-8b56-8ed9f6580e17" containerName="nova-metadata-log" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.963887 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="34605935-d859-4dd4-a21c-a9922a8099f8" containerName="nova-api-api" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.963897 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="0744bacb-6f20-4de8-a509-cacac3ce4baf" containerName="registry-server" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.963906 4754 
memory_manager.go:354] "RemoveStaleState removing state" podUID="374cff81-352f-49da-b29e-db1c118cfd37" containerName="nova-manage" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.965045 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.968825 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.969060 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:41:56 crc kubenswrapper[4754]: I0218 19:41:56.986956 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.005342 4754 scope.go:117] "RemoveContainer" containerID="15efe080f4fa59ecf7f05ee05216ee67714949f48f9389e008d4babe98c4e3ad" Feb 18 19:41:57 crc kubenswrapper[4754]: E0218 19:41:57.006293 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15efe080f4fa59ecf7f05ee05216ee67714949f48f9389e008d4babe98c4e3ad\": container with ID starting with 15efe080f4fa59ecf7f05ee05216ee67714949f48f9389e008d4babe98c4e3ad not found: ID does not exist" containerID="15efe080f4fa59ecf7f05ee05216ee67714949f48f9389e008d4babe98c4e3ad" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.006332 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15efe080f4fa59ecf7f05ee05216ee67714949f48f9389e008d4babe98c4e3ad"} err="failed to get container status \"15efe080f4fa59ecf7f05ee05216ee67714949f48f9389e008d4babe98c4e3ad\": rpc error: code = NotFound desc = could not find container \"15efe080f4fa59ecf7f05ee05216ee67714949f48f9389e008d4babe98c4e3ad\": container with ID starting with 
15efe080f4fa59ecf7f05ee05216ee67714949f48f9389e008d4babe98c4e3ad not found: ID does not exist" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.006361 4754 scope.go:117] "RemoveContainer" containerID="50c66d9f6311868da6ff71915b0a4e4e34961562182af975f050599846ddd8ea" Feb 18 19:41:57 crc kubenswrapper[4754]: E0218 19:41:57.006696 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50c66d9f6311868da6ff71915b0a4e4e34961562182af975f050599846ddd8ea\": container with ID starting with 50c66d9f6311868da6ff71915b0a4e4e34961562182af975f050599846ddd8ea not found: ID does not exist" containerID="50c66d9f6311868da6ff71915b0a4e4e34961562182af975f050599846ddd8ea" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.006725 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50c66d9f6311868da6ff71915b0a4e4e34961562182af975f050599846ddd8ea"} err="failed to get container status \"50c66d9f6311868da6ff71915b0a4e4e34961562182af975f050599846ddd8ea\": rpc error: code = NotFound desc = could not find container \"50c66d9f6311868da6ff71915b0a4e4e34961562182af975f050599846ddd8ea\": container with ID starting with 50c66d9f6311868da6ff71915b0a4e4e34961562182af975f050599846ddd8ea not found: ID does not exist" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.006748 4754 scope.go:117] "RemoveContainer" containerID="4e69b7f8cecc43ad2c87272ea87230a38a8059d14a23aabe01be60297d1306f1" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.035292 4754 scope.go:117] "RemoveContainer" containerID="9b46709400a1e4360c4212962ed64141c7b25893a064c40fc9374249088977c5" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.055639 4754 scope.go:117] "RemoveContainer" containerID="4e69b7f8cecc43ad2c87272ea87230a38a8059d14a23aabe01be60297d1306f1" Feb 18 19:41:57 crc kubenswrapper[4754]: E0218 19:41:57.056192 4754 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"4e69b7f8cecc43ad2c87272ea87230a38a8059d14a23aabe01be60297d1306f1\": container with ID starting with 4e69b7f8cecc43ad2c87272ea87230a38a8059d14a23aabe01be60297d1306f1 not found: ID does not exist" containerID="4e69b7f8cecc43ad2c87272ea87230a38a8059d14a23aabe01be60297d1306f1" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.056240 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e69b7f8cecc43ad2c87272ea87230a38a8059d14a23aabe01be60297d1306f1"} err="failed to get container status \"4e69b7f8cecc43ad2c87272ea87230a38a8059d14a23aabe01be60297d1306f1\": rpc error: code = NotFound desc = could not find container \"4e69b7f8cecc43ad2c87272ea87230a38a8059d14a23aabe01be60297d1306f1\": container with ID starting with 4e69b7f8cecc43ad2c87272ea87230a38a8059d14a23aabe01be60297d1306f1 not found: ID does not exist" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.056270 4754 scope.go:117] "RemoveContainer" containerID="9b46709400a1e4360c4212962ed64141c7b25893a064c40fc9374249088977c5" Feb 18 19:41:57 crc kubenswrapper[4754]: E0218 19:41:57.056626 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b46709400a1e4360c4212962ed64141c7b25893a064c40fc9374249088977c5\": container with ID starting with 9b46709400a1e4360c4212962ed64141c7b25893a064c40fc9374249088977c5 not found: ID does not exist" containerID="9b46709400a1e4360c4212962ed64141c7b25893a064c40fc9374249088977c5" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.056661 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b46709400a1e4360c4212962ed64141c7b25893a064c40fc9374249088977c5"} err="failed to get container status \"9b46709400a1e4360c4212962ed64141c7b25893a064c40fc9374249088977c5\": rpc error: code = NotFound desc = could not find container 
\"9b46709400a1e4360c4212962ed64141c7b25893a064c40fc9374249088977c5\": container with ID starting with 9b46709400a1e4360c4212962ed64141c7b25893a064c40fc9374249088977c5 not found: ID does not exist" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.146645 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f34d7d-86e9-4a79-8b5b-59a6e36cf2be-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"85f34d7d-86e9-4a79-8b5b-59a6e36cf2be\") " pod="openstack/nova-metadata-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.146687 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww4t5\" (UniqueName: \"kubernetes.io/projected/85f34d7d-86e9-4a79-8b5b-59a6e36cf2be-kube-api-access-ww4t5\") pod \"nova-metadata-0\" (UID: \"85f34d7d-86e9-4a79-8b5b-59a6e36cf2be\") " pod="openstack/nova-metadata-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.146724 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85f34d7d-86e9-4a79-8b5b-59a6e36cf2be-logs\") pod \"nova-metadata-0\" (UID: \"85f34d7d-86e9-4a79-8b5b-59a6e36cf2be\") " pod="openstack/nova-metadata-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.146821 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f34d7d-86e9-4a79-8b5b-59a6e36cf2be-config-data\") pod \"nova-metadata-0\" (UID: \"85f34d7d-86e9-4a79-8b5b-59a6e36cf2be\") " pod="openstack/nova-metadata-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.146881 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/85f34d7d-86e9-4a79-8b5b-59a6e36cf2be-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"85f34d7d-86e9-4a79-8b5b-59a6e36cf2be\") " pod="openstack/nova-metadata-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.229267 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.239390 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.249561 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f34d7d-86e9-4a79-8b5b-59a6e36cf2be-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"85f34d7d-86e9-4a79-8b5b-59a6e36cf2be\") " pod="openstack/nova-metadata-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.249817 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww4t5\" (UniqueName: \"kubernetes.io/projected/85f34d7d-86e9-4a79-8b5b-59a6e36cf2be-kube-api-access-ww4t5\") pod \"nova-metadata-0\" (UID: \"85f34d7d-86e9-4a79-8b5b-59a6e36cf2be\") " pod="openstack/nova-metadata-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.249973 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85f34d7d-86e9-4a79-8b5b-59a6e36cf2be-logs\") pod \"nova-metadata-0\" (UID: \"85f34d7d-86e9-4a79-8b5b-59a6e36cf2be\") " pod="openstack/nova-metadata-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.250277 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f34d7d-86e9-4a79-8b5b-59a6e36cf2be-config-data\") pod \"nova-metadata-0\" (UID: \"85f34d7d-86e9-4a79-8b5b-59a6e36cf2be\") " pod="openstack/nova-metadata-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.250432 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85f34d7d-86e9-4a79-8b5b-59a6e36cf2be-logs\") pod \"nova-metadata-0\" (UID: \"85f34d7d-86e9-4a79-8b5b-59a6e36cf2be\") " pod="openstack/nova-metadata-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.250890 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/85f34d7d-86e9-4a79-8b5b-59a6e36cf2be-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"85f34d7d-86e9-4a79-8b5b-59a6e36cf2be\") " pod="openstack/nova-metadata-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.255468 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/85f34d7d-86e9-4a79-8b5b-59a6e36cf2be-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"85f34d7d-86e9-4a79-8b5b-59a6e36cf2be\") " pod="openstack/nova-metadata-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.257002 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f34d7d-86e9-4a79-8b5b-59a6e36cf2be-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"85f34d7d-86e9-4a79-8b5b-59a6e36cf2be\") " pod="openstack/nova-metadata-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.256439 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f34d7d-86e9-4a79-8b5b-59a6e36cf2be-config-data\") pod \"nova-metadata-0\" (UID: \"85f34d7d-86e9-4a79-8b5b-59a6e36cf2be\") " pod="openstack/nova-metadata-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.273943 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.277071 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.277436 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww4t5\" (UniqueName: \"kubernetes.io/projected/85f34d7d-86e9-4a79-8b5b-59a6e36cf2be-kube-api-access-ww4t5\") pod \"nova-metadata-0\" (UID: \"85f34d7d-86e9-4a79-8b5b-59a6e36cf2be\") " pod="openstack/nova-metadata-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.279892 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.279983 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.280268 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.294133 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.295782 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.456684 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc56d60b-cef7-41a2-a20a-ee0464768793-logs\") pod \"nova-api-0\" (UID: \"fc56d60b-cef7-41a2-a20a-ee0464768793\") " pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.456738 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llflt\" (UniqueName: \"kubernetes.io/projected/fc56d60b-cef7-41a2-a20a-ee0464768793-kube-api-access-llflt\") pod \"nova-api-0\" (UID: \"fc56d60b-cef7-41a2-a20a-ee0464768793\") " pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.456790 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc56d60b-cef7-41a2-a20a-ee0464768793-public-tls-certs\") pod \"nova-api-0\" (UID: \"fc56d60b-cef7-41a2-a20a-ee0464768793\") " pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.456820 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc56d60b-cef7-41a2-a20a-ee0464768793-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fc56d60b-cef7-41a2-a20a-ee0464768793\") " pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.456860 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc56d60b-cef7-41a2-a20a-ee0464768793-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc56d60b-cef7-41a2-a20a-ee0464768793\") " pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.456899 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc56d60b-cef7-41a2-a20a-ee0464768793-config-data\") pod \"nova-api-0\" (UID: \"fc56d60b-cef7-41a2-a20a-ee0464768793\") " pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.558967 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc56d60b-cef7-41a2-a20a-ee0464768793-logs\") pod \"nova-api-0\" (UID: \"fc56d60b-cef7-41a2-a20a-ee0464768793\") " pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.559297 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llflt\" (UniqueName: \"kubernetes.io/projected/fc56d60b-cef7-41a2-a20a-ee0464768793-kube-api-access-llflt\") pod \"nova-api-0\" (UID: \"fc56d60b-cef7-41a2-a20a-ee0464768793\") " pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.559351 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc56d60b-cef7-41a2-a20a-ee0464768793-public-tls-certs\") pod \"nova-api-0\" (UID: \"fc56d60b-cef7-41a2-a20a-ee0464768793\") " pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.559379 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc56d60b-cef7-41a2-a20a-ee0464768793-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fc56d60b-cef7-41a2-a20a-ee0464768793\") " pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.559423 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc56d60b-cef7-41a2-a20a-ee0464768793-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"fc56d60b-cef7-41a2-a20a-ee0464768793\") " pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.559463 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc56d60b-cef7-41a2-a20a-ee0464768793-config-data\") pod \"nova-api-0\" (UID: \"fc56d60b-cef7-41a2-a20a-ee0464768793\") " pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.559846 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc56d60b-cef7-41a2-a20a-ee0464768793-logs\") pod \"nova-api-0\" (UID: \"fc56d60b-cef7-41a2-a20a-ee0464768793\") " pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.566189 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc56d60b-cef7-41a2-a20a-ee0464768793-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc56d60b-cef7-41a2-a20a-ee0464768793\") " pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.566842 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc56d60b-cef7-41a2-a20a-ee0464768793-config-data\") pod \"nova-api-0\" (UID: \"fc56d60b-cef7-41a2-a20a-ee0464768793\") " pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.570085 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc56d60b-cef7-41a2-a20a-ee0464768793-public-tls-certs\") pod \"nova-api-0\" (UID: \"fc56d60b-cef7-41a2-a20a-ee0464768793\") " pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.579216 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fc56d60b-cef7-41a2-a20a-ee0464768793-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fc56d60b-cef7-41a2-a20a-ee0464768793\") " pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.586428 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llflt\" (UniqueName: \"kubernetes.io/projected/fc56d60b-cef7-41a2-a20a-ee0464768793-kube-api-access-llflt\") pod \"nova-api-0\" (UID: \"fc56d60b-cef7-41a2-a20a-ee0464768793\") " pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.719259 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.805809 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:41:57 crc kubenswrapper[4754]: I0218 19:41:57.908763 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"85f34d7d-86e9-4a79-8b5b-59a6e36cf2be","Type":"ContainerStarted","Data":"dd51035b96f19416e877cebd478e29d7333250ef3c7ad72e23b70a0a61d0bfc2"} Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.157569 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:41:58 crc kubenswrapper[4754]: W0218 19:41:58.167385 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc56d60b_cef7_41a2_a20a_ee0464768793.slice/crio-cdb947a3ce7e5c02427116f2afb2140be0ed864820a354278f62d78bf419996a WatchSource:0}: Error finding container cdb947a3ce7e5c02427116f2afb2140be0ed864820a354278f62d78bf419996a: Status 404 returned error can't find the container with id cdb947a3ce7e5c02427116f2afb2140be0ed864820a354278f62d78bf419996a Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.204331 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-mgf5c" Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.237073 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="254a62e3-ae00-4da7-8b56-8ed9f6580e17" path="/var/lib/kubelet/pods/254a62e3-ae00-4da7-8b56-8ed9f6580e17/volumes" Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.239588 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34605935-d859-4dd4-a21c-a9922a8099f8" path="/var/lib/kubelet/pods/34605935-d859-4dd4-a21c-a9922a8099f8/volumes" Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.273996 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mgf5c" Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.454451 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mgf5c"] Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.481094 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.576950 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40656c76-f405-4d8a-8b29-384ccae5068b-config-data\") pod \"40656c76-f405-4d8a-8b29-384ccae5068b\" (UID: \"40656c76-f405-4d8a-8b29-384ccae5068b\") " Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.577020 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxvkb\" (UniqueName: \"kubernetes.io/projected/40656c76-f405-4d8a-8b29-384ccae5068b-kube-api-access-bxvkb\") pod \"40656c76-f405-4d8a-8b29-384ccae5068b\" (UID: \"40656c76-f405-4d8a-8b29-384ccae5068b\") " Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.577056 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40656c76-f405-4d8a-8b29-384ccae5068b-combined-ca-bundle\") pod \"40656c76-f405-4d8a-8b29-384ccae5068b\" (UID: \"40656c76-f405-4d8a-8b29-384ccae5068b\") " Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.583048 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40656c76-f405-4d8a-8b29-384ccae5068b-kube-api-access-bxvkb" (OuterVolumeSpecName: "kube-api-access-bxvkb") pod "40656c76-f405-4d8a-8b29-384ccae5068b" (UID: "40656c76-f405-4d8a-8b29-384ccae5068b"). InnerVolumeSpecName "kube-api-access-bxvkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.619072 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40656c76-f405-4d8a-8b29-384ccae5068b-config-data" (OuterVolumeSpecName: "config-data") pod "40656c76-f405-4d8a-8b29-384ccae5068b" (UID: "40656c76-f405-4d8a-8b29-384ccae5068b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.619551 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40656c76-f405-4d8a-8b29-384ccae5068b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40656c76-f405-4d8a-8b29-384ccae5068b" (UID: "40656c76-f405-4d8a-8b29-384ccae5068b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.679810 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40656c76-f405-4d8a-8b29-384ccae5068b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.680193 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxvkb\" (UniqueName: \"kubernetes.io/projected/40656c76-f405-4d8a-8b29-384ccae5068b-kube-api-access-bxvkb\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.680212 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40656c76-f405-4d8a-8b29-384ccae5068b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.922419 4754 generic.go:334] "Generic (PLEG): container finished" podID="40656c76-f405-4d8a-8b29-384ccae5068b" containerID="4aabcb2e001cced419478763a85e85aca5ae6e8aa0b7fb0301fa93993e1bc783" exitCode=0 Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.922526 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40656c76-f405-4d8a-8b29-384ccae5068b","Type":"ContainerDied","Data":"4aabcb2e001cced419478763a85e85aca5ae6e8aa0b7fb0301fa93993e1bc783"} Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.922560 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.922584 4754 scope.go:117] "RemoveContainer" containerID="4aabcb2e001cced419478763a85e85aca5ae6e8aa0b7fb0301fa93993e1bc783" Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.922567 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40656c76-f405-4d8a-8b29-384ccae5068b","Type":"ContainerDied","Data":"49e369ee208bdf218c4242cabbfb6f3efac7a9ae75f23d3f696ccffcc0728fa9"} Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.925725 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"85f34d7d-86e9-4a79-8b5b-59a6e36cf2be","Type":"ContainerStarted","Data":"0e9a8b7a7f8d1c1bad71a5ff958ebd9e90567e641b8d47e036ff01975de6d248"} Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.925769 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"85f34d7d-86e9-4a79-8b5b-59a6e36cf2be","Type":"ContainerStarted","Data":"4957b2fd20f1a81f8698b03420f07a77c0fc63ae05cf6a9125bf43788ab72a98"} Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.932940 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc56d60b-cef7-41a2-a20a-ee0464768793","Type":"ContainerStarted","Data":"d4f873069601178c4500cce2258df1cce230662a01996f100f701156066d404c"} Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.933175 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc56d60b-cef7-41a2-a20a-ee0464768793","Type":"ContainerStarted","Data":"36185e629bd895eed46a8d4ffa91d41f18f3a6790d2289f53ca47db4b78c2fee"} Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.933244 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"fc56d60b-cef7-41a2-a20a-ee0464768793","Type":"ContainerStarted","Data":"cdb947a3ce7e5c02427116f2afb2140be0ed864820a354278f62d78bf419996a"} Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.954768 4754 scope.go:117] "RemoveContainer" containerID="4aabcb2e001cced419478763a85e85aca5ae6e8aa0b7fb0301fa93993e1bc783" Feb 18 19:41:58 crc kubenswrapper[4754]: E0218 19:41:58.956694 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aabcb2e001cced419478763a85e85aca5ae6e8aa0b7fb0301fa93993e1bc783\": container with ID starting with 4aabcb2e001cced419478763a85e85aca5ae6e8aa0b7fb0301fa93993e1bc783 not found: ID does not exist" containerID="4aabcb2e001cced419478763a85e85aca5ae6e8aa0b7fb0301fa93993e1bc783" Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.956865 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aabcb2e001cced419478763a85e85aca5ae6e8aa0b7fb0301fa93993e1bc783"} err="failed to get container status \"4aabcb2e001cced419478763a85e85aca5ae6e8aa0b7fb0301fa93993e1bc783\": rpc error: code = NotFound desc = could not find container \"4aabcb2e001cced419478763a85e85aca5ae6e8aa0b7fb0301fa93993e1bc783\": container with ID starting with 4aabcb2e001cced419478763a85e85aca5ae6e8aa0b7fb0301fa93993e1bc783 not found: ID does not exist" Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 19:41:58.961135 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.961116358 podStartE2EDuration="2.961116358s" podCreationTimestamp="2026-02-18 19:41:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:58.951126141 +0000 UTC m=+1421.401538937" watchObservedRunningTime="2026-02-18 19:41:58.961116358 +0000 UTC m=+1421.411529154" Feb 18 19:41:58 crc kubenswrapper[4754]: I0218 
19:41:58.987578 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.987551724 podStartE2EDuration="1.987551724s" podCreationTimestamp="2026-02-18 19:41:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:41:58.976206494 +0000 UTC m=+1421.426619310" watchObservedRunningTime="2026-02-18 19:41:58.987551724 +0000 UTC m=+1421.437964520" Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.003511 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.019249 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.031530 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:41:59 crc kubenswrapper[4754]: E0218 19:41:59.032125 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40656c76-f405-4d8a-8b29-384ccae5068b" containerName="nova-scheduler-scheduler" Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.032166 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="40656c76-f405-4d8a-8b29-384ccae5068b" containerName="nova-scheduler-scheduler" Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.032453 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="40656c76-f405-4d8a-8b29-384ccae5068b" containerName="nova-scheduler-scheduler" Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.034487 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.041685 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.059368 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.200055 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff8ee982-1032-4f39-a0ca-6cf469230730-config-data\") pod \"nova-scheduler-0\" (UID: \"ff8ee982-1032-4f39-a0ca-6cf469230730\") " pod="openstack/nova-scheduler-0" Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.200105 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpzpg\" (UniqueName: \"kubernetes.io/projected/ff8ee982-1032-4f39-a0ca-6cf469230730-kube-api-access-zpzpg\") pod \"nova-scheduler-0\" (UID: \"ff8ee982-1032-4f39-a0ca-6cf469230730\") " pod="openstack/nova-scheduler-0" Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.200784 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff8ee982-1032-4f39-a0ca-6cf469230730-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ff8ee982-1032-4f39-a0ca-6cf469230730\") " pod="openstack/nova-scheduler-0" Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.303080 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff8ee982-1032-4f39-a0ca-6cf469230730-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ff8ee982-1032-4f39-a0ca-6cf469230730\") " pod="openstack/nova-scheduler-0" Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.303188 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff8ee982-1032-4f39-a0ca-6cf469230730-config-data\") pod \"nova-scheduler-0\" (UID: \"ff8ee982-1032-4f39-a0ca-6cf469230730\") " pod="openstack/nova-scheduler-0" Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.303220 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpzpg\" (UniqueName: \"kubernetes.io/projected/ff8ee982-1032-4f39-a0ca-6cf469230730-kube-api-access-zpzpg\") pod \"nova-scheduler-0\" (UID: \"ff8ee982-1032-4f39-a0ca-6cf469230730\") " pod="openstack/nova-scheduler-0" Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.307058 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff8ee982-1032-4f39-a0ca-6cf469230730-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ff8ee982-1032-4f39-a0ca-6cf469230730\") " pod="openstack/nova-scheduler-0" Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.307259 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff8ee982-1032-4f39-a0ca-6cf469230730-config-data\") pod \"nova-scheduler-0\" (UID: \"ff8ee982-1032-4f39-a0ca-6cf469230730\") " pod="openstack/nova-scheduler-0" Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.326744 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpzpg\" (UniqueName: \"kubernetes.io/projected/ff8ee982-1032-4f39-a0ca-6cf469230730-kube-api-access-zpzpg\") pod \"nova-scheduler-0\" (UID: \"ff8ee982-1032-4f39-a0ca-6cf469230730\") " pod="openstack/nova-scheduler-0" Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.362185 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.820786 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:41:59 crc kubenswrapper[4754]: W0218 19:41:59.834299 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff8ee982_1032_4f39_a0ca_6cf469230730.slice/crio-4af9dddcb127399fc87c24b19d37726d44768cc93456a471160c86c00a411073 WatchSource:0}: Error finding container 4af9dddcb127399fc87c24b19d37726d44768cc93456a471160c86c00a411073: Status 404 returned error can't find the container with id 4af9dddcb127399fc87c24b19d37726d44768cc93456a471160c86c00a411073 Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.945912 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mgf5c" podUID="57e445d6-0105-4d21-ac8b-812932e998a2" containerName="registry-server" containerID="cri-o://d36a74bea1a858c6d25f80d05da1139bdb8d119c84ac4a304ef0b5a0563bce6a" gracePeriod=2 Feb 18 19:41:59 crc kubenswrapper[4754]: I0218 19:41:59.946538 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ff8ee982-1032-4f39-a0ca-6cf469230730","Type":"ContainerStarted","Data":"4af9dddcb127399fc87c24b19d37726d44768cc93456a471160c86c00a411073"} Feb 18 19:42:00 crc kubenswrapper[4754]: I0218 19:42:00.225019 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40656c76-f405-4d8a-8b29-384ccae5068b" path="/var/lib/kubelet/pods/40656c76-f405-4d8a-8b29-384ccae5068b/volumes" Feb 18 19:42:00 crc kubenswrapper[4754]: I0218 19:42:00.402672 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mgf5c" Feb 18 19:42:00 crc kubenswrapper[4754]: I0218 19:42:00.537392 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8vwx\" (UniqueName: \"kubernetes.io/projected/57e445d6-0105-4d21-ac8b-812932e998a2-kube-api-access-h8vwx\") pod \"57e445d6-0105-4d21-ac8b-812932e998a2\" (UID: \"57e445d6-0105-4d21-ac8b-812932e998a2\") " Feb 18 19:42:00 crc kubenswrapper[4754]: I0218 19:42:00.537683 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57e445d6-0105-4d21-ac8b-812932e998a2-catalog-content\") pod \"57e445d6-0105-4d21-ac8b-812932e998a2\" (UID: \"57e445d6-0105-4d21-ac8b-812932e998a2\") " Feb 18 19:42:00 crc kubenswrapper[4754]: I0218 19:42:00.537951 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57e445d6-0105-4d21-ac8b-812932e998a2-utilities\") pod \"57e445d6-0105-4d21-ac8b-812932e998a2\" (UID: \"57e445d6-0105-4d21-ac8b-812932e998a2\") " Feb 18 19:42:00 crc kubenswrapper[4754]: I0218 19:42:00.538656 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57e445d6-0105-4d21-ac8b-812932e998a2-utilities" (OuterVolumeSpecName: "utilities") pod "57e445d6-0105-4d21-ac8b-812932e998a2" (UID: "57e445d6-0105-4d21-ac8b-812932e998a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:00 crc kubenswrapper[4754]: I0218 19:42:00.550186 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e445d6-0105-4d21-ac8b-812932e998a2-kube-api-access-h8vwx" (OuterVolumeSpecName: "kube-api-access-h8vwx") pod "57e445d6-0105-4d21-ac8b-812932e998a2" (UID: "57e445d6-0105-4d21-ac8b-812932e998a2"). InnerVolumeSpecName "kube-api-access-h8vwx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:42:00 crc kubenswrapper[4754]: I0218 19:42:00.640965 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8vwx\" (UniqueName: \"kubernetes.io/projected/57e445d6-0105-4d21-ac8b-812932e998a2-kube-api-access-h8vwx\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:00 crc kubenswrapper[4754]: I0218 19:42:00.641012 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57e445d6-0105-4d21-ac8b-812932e998a2-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:00 crc kubenswrapper[4754]: I0218 19:42:00.694524 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57e445d6-0105-4d21-ac8b-812932e998a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57e445d6-0105-4d21-ac8b-812932e998a2" (UID: "57e445d6-0105-4d21-ac8b-812932e998a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:00 crc kubenswrapper[4754]: I0218 19:42:00.742905 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57e445d6-0105-4d21-ac8b-812932e998a2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:00 crc kubenswrapper[4754]: I0218 19:42:00.970611 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ff8ee982-1032-4f39-a0ca-6cf469230730","Type":"ContainerStarted","Data":"f6ec0d3e4574230936e53e87ee5e19bc478dc0188f4020820d2f846a711cd734"} Feb 18 19:42:00 crc kubenswrapper[4754]: I0218 19:42:00.976522 4754 generic.go:334] "Generic (PLEG): container finished" podID="57e445d6-0105-4d21-ac8b-812932e998a2" containerID="d36a74bea1a858c6d25f80d05da1139bdb8d119c84ac4a304ef0b5a0563bce6a" exitCode=0 Feb 18 19:42:00 crc kubenswrapper[4754]: I0218 19:42:00.976564 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mgf5c" Feb 18 19:42:00 crc kubenswrapper[4754]: I0218 19:42:00.976614 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mgf5c" event={"ID":"57e445d6-0105-4d21-ac8b-812932e998a2","Type":"ContainerDied","Data":"d36a74bea1a858c6d25f80d05da1139bdb8d119c84ac4a304ef0b5a0563bce6a"} Feb 18 19:42:00 crc kubenswrapper[4754]: I0218 19:42:00.976680 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mgf5c" event={"ID":"57e445d6-0105-4d21-ac8b-812932e998a2","Type":"ContainerDied","Data":"99a18ea8eeebe2618561c79430dead64b760ccea0a436aa86352f054d1c986d6"} Feb 18 19:42:00 crc kubenswrapper[4754]: I0218 19:42:00.976706 4754 scope.go:117] "RemoveContainer" containerID="d36a74bea1a858c6d25f80d05da1139bdb8d119c84ac4a304ef0b5a0563bce6a" Feb 18 19:42:01 crc kubenswrapper[4754]: I0218 19:42:01.022467 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.022427646 podStartE2EDuration="3.022427646s" podCreationTimestamp="2026-02-18 19:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:42:01.003424328 +0000 UTC m=+1423.453837124" watchObservedRunningTime="2026-02-18 19:42:01.022427646 +0000 UTC m=+1423.472840482" Feb 18 19:42:01 crc kubenswrapper[4754]: I0218 19:42:01.035790 4754 scope.go:117] "RemoveContainer" containerID="3b1e4deae6fee1b518fc91547ee21a885ae0ba358392b77b65e54724a087273b" Feb 18 19:42:01 crc kubenswrapper[4754]: I0218 19:42:01.056066 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mgf5c"] Feb 18 19:42:01 crc kubenswrapper[4754]: I0218 19:42:01.071096 4754 scope.go:117] "RemoveContainer" containerID="116d3f6f702d17b73c4983d1487dbabbac6aee7202630c27cbdfe657265b3254" Feb 18 19:42:01 crc 
kubenswrapper[4754]: I0218 19:42:01.071726 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mgf5c"] Feb 18 19:42:01 crc kubenswrapper[4754]: I0218 19:42:01.142826 4754 scope.go:117] "RemoveContainer" containerID="d36a74bea1a858c6d25f80d05da1139bdb8d119c84ac4a304ef0b5a0563bce6a" Feb 18 19:42:01 crc kubenswrapper[4754]: E0218 19:42:01.143453 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d36a74bea1a858c6d25f80d05da1139bdb8d119c84ac4a304ef0b5a0563bce6a\": container with ID starting with d36a74bea1a858c6d25f80d05da1139bdb8d119c84ac4a304ef0b5a0563bce6a not found: ID does not exist" containerID="d36a74bea1a858c6d25f80d05da1139bdb8d119c84ac4a304ef0b5a0563bce6a" Feb 18 19:42:01 crc kubenswrapper[4754]: I0218 19:42:01.143528 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d36a74bea1a858c6d25f80d05da1139bdb8d119c84ac4a304ef0b5a0563bce6a"} err="failed to get container status \"d36a74bea1a858c6d25f80d05da1139bdb8d119c84ac4a304ef0b5a0563bce6a\": rpc error: code = NotFound desc = could not find container \"d36a74bea1a858c6d25f80d05da1139bdb8d119c84ac4a304ef0b5a0563bce6a\": container with ID starting with d36a74bea1a858c6d25f80d05da1139bdb8d119c84ac4a304ef0b5a0563bce6a not found: ID does not exist" Feb 18 19:42:01 crc kubenswrapper[4754]: I0218 19:42:01.143565 4754 scope.go:117] "RemoveContainer" containerID="3b1e4deae6fee1b518fc91547ee21a885ae0ba358392b77b65e54724a087273b" Feb 18 19:42:01 crc kubenswrapper[4754]: E0218 19:42:01.144070 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b1e4deae6fee1b518fc91547ee21a885ae0ba358392b77b65e54724a087273b\": container with ID starting with 3b1e4deae6fee1b518fc91547ee21a885ae0ba358392b77b65e54724a087273b not found: ID does not exist" 
containerID="3b1e4deae6fee1b518fc91547ee21a885ae0ba358392b77b65e54724a087273b" Feb 18 19:42:01 crc kubenswrapper[4754]: I0218 19:42:01.144106 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b1e4deae6fee1b518fc91547ee21a885ae0ba358392b77b65e54724a087273b"} err="failed to get container status \"3b1e4deae6fee1b518fc91547ee21a885ae0ba358392b77b65e54724a087273b\": rpc error: code = NotFound desc = could not find container \"3b1e4deae6fee1b518fc91547ee21a885ae0ba358392b77b65e54724a087273b\": container with ID starting with 3b1e4deae6fee1b518fc91547ee21a885ae0ba358392b77b65e54724a087273b not found: ID does not exist" Feb 18 19:42:01 crc kubenswrapper[4754]: I0218 19:42:01.144134 4754 scope.go:117] "RemoveContainer" containerID="116d3f6f702d17b73c4983d1487dbabbac6aee7202630c27cbdfe657265b3254" Feb 18 19:42:01 crc kubenswrapper[4754]: E0218 19:42:01.144507 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"116d3f6f702d17b73c4983d1487dbabbac6aee7202630c27cbdfe657265b3254\": container with ID starting with 116d3f6f702d17b73c4983d1487dbabbac6aee7202630c27cbdfe657265b3254 not found: ID does not exist" containerID="116d3f6f702d17b73c4983d1487dbabbac6aee7202630c27cbdfe657265b3254" Feb 18 19:42:01 crc kubenswrapper[4754]: I0218 19:42:01.144547 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"116d3f6f702d17b73c4983d1487dbabbac6aee7202630c27cbdfe657265b3254"} err="failed to get container status \"116d3f6f702d17b73c4983d1487dbabbac6aee7202630c27cbdfe657265b3254\": rpc error: code = NotFound desc = could not find container \"116d3f6f702d17b73c4983d1487dbabbac6aee7202630c27cbdfe657265b3254\": container with ID starting with 116d3f6f702d17b73c4983d1487dbabbac6aee7202630c27cbdfe657265b3254 not found: ID does not exist" Feb 18 19:42:02 crc kubenswrapper[4754]: I0218 19:42:02.221849 4754 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57e445d6-0105-4d21-ac8b-812932e998a2" path="/var/lib/kubelet/pods/57e445d6-0105-4d21-ac8b-812932e998a2/volumes" Feb 18 19:42:02 crc kubenswrapper[4754]: I0218 19:42:02.296616 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:42:02 crc kubenswrapper[4754]: I0218 19:42:02.296675 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:42:04 crc kubenswrapper[4754]: I0218 19:42:04.362610 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 19:42:07 crc kubenswrapper[4754]: I0218 19:42:07.296300 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 19:42:07 crc kubenswrapper[4754]: I0218 19:42:07.296862 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 19:42:07 crc kubenswrapper[4754]: I0218 19:42:07.720233 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:42:07 crc kubenswrapper[4754]: I0218 19:42:07.720314 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:42:08 crc kubenswrapper[4754]: I0218 19:42:08.110965 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:42:08 crc kubenswrapper[4754]: I0218 19:42:08.111025 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:42:08 crc kubenswrapper[4754]: I0218 19:42:08.312423 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="85f34d7d-86e9-4a79-8b5b-59a6e36cf2be" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:42:08 crc kubenswrapper[4754]: I0218 19:42:08.312556 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="85f34d7d-86e9-4a79-8b5b-59a6e36cf2be" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:42:08 crc kubenswrapper[4754]: I0218 19:42:08.737373 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fc56d60b-cef7-41a2-a20a-ee0464768793" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.227:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:42:08 crc kubenswrapper[4754]: I0218 19:42:08.737399 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fc56d60b-cef7-41a2-a20a-ee0464768793" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.227:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:42:09 crc kubenswrapper[4754]: I0218 19:42:09.362555 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 19:42:09 crc kubenswrapper[4754]: I0218 19:42:09.399359 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 19:42:10 crc kubenswrapper[4754]: I0218 19:42:10.111006 4754 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 19:42:10 crc kubenswrapper[4754]: I0218 19:42:10.982276 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 19:42:17 crc kubenswrapper[4754]: I0218 19:42:17.305093 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 19:42:17 crc kubenswrapper[4754]: I0218 19:42:17.308850 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 19:42:17 crc kubenswrapper[4754]: I0218 19:42:17.313714 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 19:42:17 crc kubenswrapper[4754]: I0218 19:42:17.730647 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:42:17 crc kubenswrapper[4754]: I0218 19:42:17.731454 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:42:17 crc kubenswrapper[4754]: I0218 19:42:17.735572 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:42:17 crc kubenswrapper[4754]: I0218 19:42:17.738921 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:42:18 crc kubenswrapper[4754]: I0218 19:42:18.160452 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:42:18 crc kubenswrapper[4754]: I0218 19:42:18.165364 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 19:42:18 crc kubenswrapper[4754]: I0218 19:42:18.169433 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:42:26 crc kubenswrapper[4754]: I0218 19:42:26.203967 4754 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:42:27 crc kubenswrapper[4754]: I0218 19:42:27.850293 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:42:30 crc kubenswrapper[4754]: I0218 19:42:30.520191 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="3c266c06-8bfc-47ba-bab9-6ef36d6294e5" containerName="rabbitmq" containerID="cri-o://b2894d812696ce83d9192307210da0deb6fd293496355c1b160843dda45f1ad8" gracePeriod=604796 Feb 18 19:42:30 crc kubenswrapper[4754]: I0218 19:42:30.659823 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="3c266c06-8bfc-47ba-bab9-6ef36d6294e5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Feb 18 19:42:31 crc kubenswrapper[4754]: I0218 19:42:31.925256 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="d87128e7-abb0-4dd7-9b9f-04a4393c2313" containerName="rabbitmq" containerID="cri-o://a69dbc60091260ceda33a8fe29c15bed8145465523e90ca8c0e175f1a682f469" gracePeriod=604796 Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.109009 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.256821 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.256907 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqgnj\" (UniqueName: \"kubernetes.io/projected/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-kube-api-access-kqgnj\") pod \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.256995 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-server-conf\") pod \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.257075 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-plugins-conf\") pod \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.257116 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-pod-info\") pod \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.257171 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-config-data\") pod \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.257213 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-tls\") pod \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.257239 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-erlang-cookie-secret\") pod \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.257304 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-plugins\") pod \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.257329 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-confd\") pod \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.257346 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-erlang-cookie\") pod \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\" (UID: \"3c266c06-8bfc-47ba-bab9-6ef36d6294e5\") " Feb 18 19:42:37 crc 
kubenswrapper[4754]: I0218 19:42:37.257907 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3c266c06-8bfc-47ba-bab9-6ef36d6294e5" (UID: "3c266c06-8bfc-47ba-bab9-6ef36d6294e5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.258313 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3c266c06-8bfc-47ba-bab9-6ef36d6294e5" (UID: "3c266c06-8bfc-47ba-bab9-6ef36d6294e5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.258729 4754 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.258747 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.260713 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3c266c06-8bfc-47ba-bab9-6ef36d6294e5" (UID: "3c266c06-8bfc-47ba-bab9-6ef36d6294e5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.261989 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3c266c06-8bfc-47ba-bab9-6ef36d6294e5" (UID: "3c266c06-8bfc-47ba-bab9-6ef36d6294e5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.262103 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-kube-api-access-kqgnj" (OuterVolumeSpecName: "kube-api-access-kqgnj") pod "3c266c06-8bfc-47ba-bab9-6ef36d6294e5" (UID: "3c266c06-8bfc-47ba-bab9-6ef36d6294e5"). InnerVolumeSpecName "kube-api-access-kqgnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.265956 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3c266c06-8bfc-47ba-bab9-6ef36d6294e5" (UID: "3c266c06-8bfc-47ba-bab9-6ef36d6294e5"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.266165 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-pod-info" (OuterVolumeSpecName: "pod-info") pod "3c266c06-8bfc-47ba-bab9-6ef36d6294e5" (UID: "3c266c06-8bfc-47ba-bab9-6ef36d6294e5"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.273363 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "3c266c06-8bfc-47ba-bab9-6ef36d6294e5" (UID: "3c266c06-8bfc-47ba-bab9-6ef36d6294e5"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.321572 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-config-data" (OuterVolumeSpecName: "config-data") pod "3c266c06-8bfc-47ba-bab9-6ef36d6294e5" (UID: "3c266c06-8bfc-47ba-bab9-6ef36d6294e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.361524 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.361566 4754 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.361577 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqgnj\" (UniqueName: \"kubernetes.io/projected/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-kube-api-access-kqgnj\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.361587 4754 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 
19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.361597 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.361606 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.361615 4754 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.367122 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-server-conf" (OuterVolumeSpecName: "server-conf") pod "3c266c06-8bfc-47ba-bab9-6ef36d6294e5" (UID: "3c266c06-8bfc-47ba-bab9-6ef36d6294e5"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.382621 4754 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.389809 4754 generic.go:334] "Generic (PLEG): container finished" podID="3c266c06-8bfc-47ba-bab9-6ef36d6294e5" containerID="b2894d812696ce83d9192307210da0deb6fd293496355c1b160843dda45f1ad8" exitCode=0 Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.389879 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.391232 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c266c06-8bfc-47ba-bab9-6ef36d6294e5","Type":"ContainerDied","Data":"b2894d812696ce83d9192307210da0deb6fd293496355c1b160843dda45f1ad8"} Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.391375 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3c266c06-8bfc-47ba-bab9-6ef36d6294e5","Type":"ContainerDied","Data":"9d3f2eb31ffc3cb9f4e5addcd4bf23642796de12345e1f8f5fcc8823da201875"} Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.391437 4754 scope.go:117] "RemoveContainer" containerID="b2894d812696ce83d9192307210da0deb6fd293496355c1b160843dda45f1ad8" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.420618 4754 scope.go:117] "RemoveContainer" containerID="8efa9ed2f8ec070336ff0001d8bd4208dbc88caaf6c105078f6dc7a9d1a19693" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.439420 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3c266c06-8bfc-47ba-bab9-6ef36d6294e5" (UID: "3c266c06-8bfc-47ba-bab9-6ef36d6294e5"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.463937 4754 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.464319 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c266c06-8bfc-47ba-bab9-6ef36d6294e5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.464443 4754 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.467339 4754 scope.go:117] "RemoveContainer" containerID="b2894d812696ce83d9192307210da0deb6fd293496355c1b160843dda45f1ad8" Feb 18 19:42:37 crc kubenswrapper[4754]: E0218 19:42:37.467930 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2894d812696ce83d9192307210da0deb6fd293496355c1b160843dda45f1ad8\": container with ID starting with b2894d812696ce83d9192307210da0deb6fd293496355c1b160843dda45f1ad8 not found: ID does not exist" containerID="b2894d812696ce83d9192307210da0deb6fd293496355c1b160843dda45f1ad8" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.467984 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2894d812696ce83d9192307210da0deb6fd293496355c1b160843dda45f1ad8"} err="failed to get container status \"b2894d812696ce83d9192307210da0deb6fd293496355c1b160843dda45f1ad8\": rpc error: code = NotFound desc = could not find container \"b2894d812696ce83d9192307210da0deb6fd293496355c1b160843dda45f1ad8\": container with ID starting with 
b2894d812696ce83d9192307210da0deb6fd293496355c1b160843dda45f1ad8 not found: ID does not exist" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.468019 4754 scope.go:117] "RemoveContainer" containerID="8efa9ed2f8ec070336ff0001d8bd4208dbc88caaf6c105078f6dc7a9d1a19693" Feb 18 19:42:37 crc kubenswrapper[4754]: E0218 19:42:37.468485 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8efa9ed2f8ec070336ff0001d8bd4208dbc88caaf6c105078f6dc7a9d1a19693\": container with ID starting with 8efa9ed2f8ec070336ff0001d8bd4208dbc88caaf6c105078f6dc7a9d1a19693 not found: ID does not exist" containerID="8efa9ed2f8ec070336ff0001d8bd4208dbc88caaf6c105078f6dc7a9d1a19693" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.468542 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8efa9ed2f8ec070336ff0001d8bd4208dbc88caaf6c105078f6dc7a9d1a19693"} err="failed to get container status \"8efa9ed2f8ec070336ff0001d8bd4208dbc88caaf6c105078f6dc7a9d1a19693\": rpc error: code = NotFound desc = could not find container \"8efa9ed2f8ec070336ff0001d8bd4208dbc88caaf6c105078f6dc7a9d1a19693\": container with ID starting with 8efa9ed2f8ec070336ff0001d8bd4208dbc88caaf6c105078f6dc7a9d1a19693 not found: ID does not exist" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.732240 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.741865 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.771581 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:42:37 crc kubenswrapper[4754]: E0218 19:42:37.772018 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e445d6-0105-4d21-ac8b-812932e998a2" containerName="registry-server" Feb 18 
19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.772035 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e445d6-0105-4d21-ac8b-812932e998a2" containerName="registry-server" Feb 18 19:42:37 crc kubenswrapper[4754]: E0218 19:42:37.772059 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c266c06-8bfc-47ba-bab9-6ef36d6294e5" containerName="setup-container" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.772065 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c266c06-8bfc-47ba-bab9-6ef36d6294e5" containerName="setup-container" Feb 18 19:42:37 crc kubenswrapper[4754]: E0218 19:42:37.772077 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e445d6-0105-4d21-ac8b-812932e998a2" containerName="extract-utilities" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.772084 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e445d6-0105-4d21-ac8b-812932e998a2" containerName="extract-utilities" Feb 18 19:42:37 crc kubenswrapper[4754]: E0218 19:42:37.772099 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c266c06-8bfc-47ba-bab9-6ef36d6294e5" containerName="rabbitmq" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.772108 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c266c06-8bfc-47ba-bab9-6ef36d6294e5" containerName="rabbitmq" Feb 18 19:42:37 crc kubenswrapper[4754]: E0218 19:42:37.772124 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e445d6-0105-4d21-ac8b-812932e998a2" containerName="extract-content" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.772130 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e445d6-0105-4d21-ac8b-812932e998a2" containerName="extract-content" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.772312 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c266c06-8bfc-47ba-bab9-6ef36d6294e5" containerName="rabbitmq" Feb 18 19:42:37 crc kubenswrapper[4754]: 
I0218 19:42:37.772327 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="57e445d6-0105-4d21-ac8b-812932e998a2" containerName="registry-server" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.773625 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.776570 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.777006 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.777117 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.777176 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.777992 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.780304 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-86n4h" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.783638 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.788817 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.871503 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42f6a974-f41f-46fe-aa5d-4a32484863ee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.871545 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.871606 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42f6a974-f41f-46fe-aa5d-4a32484863ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.871647 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42f6a974-f41f-46fe-aa5d-4a32484863ee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.871664 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/42f6a974-f41f-46fe-aa5d-4a32484863ee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.871789 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/42f6a974-f41f-46fe-aa5d-4a32484863ee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " 
pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.871887 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42f6a974-f41f-46fe-aa5d-4a32484863ee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.871917 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/42f6a974-f41f-46fe-aa5d-4a32484863ee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.872090 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rlbv\" (UniqueName: \"kubernetes.io/projected/42f6a974-f41f-46fe-aa5d-4a32484863ee-kube-api-access-5rlbv\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.872163 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/42f6a974-f41f-46fe-aa5d-4a32484863ee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.872245 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42f6a974-f41f-46fe-aa5d-4a32484863ee-config-data\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc 
kubenswrapper[4754]: I0218 19:42:37.973754 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42f6a974-f41f-46fe-aa5d-4a32484863ee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.974071 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.974198 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42f6a974-f41f-46fe-aa5d-4a32484863ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.974346 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42f6a974-f41f-46fe-aa5d-4a32484863ee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.974439 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/42f6a974-f41f-46fe-aa5d-4a32484863ee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.974258 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.974715 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42f6a974-f41f-46fe-aa5d-4a32484863ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.974879 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/42f6a974-f41f-46fe-aa5d-4a32484863ee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.974935 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42f6a974-f41f-46fe-aa5d-4a32484863ee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.974960 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/42f6a974-f41f-46fe-aa5d-4a32484863ee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.974996 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rlbv\" (UniqueName: \"kubernetes.io/projected/42f6a974-f41f-46fe-aa5d-4a32484863ee-kube-api-access-5rlbv\") pod \"rabbitmq-server-0\" (UID: 
\"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.975012 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/42f6a974-f41f-46fe-aa5d-4a32484863ee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.975072 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42f6a974-f41f-46fe-aa5d-4a32484863ee-config-data\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.975715 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42f6a974-f41f-46fe-aa5d-4a32484863ee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.975802 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/42f6a974-f41f-46fe-aa5d-4a32484863ee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.975944 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42f6a974-f41f-46fe-aa5d-4a32484863ee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.976011 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42f6a974-f41f-46fe-aa5d-4a32484863ee-config-data\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.981342 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/42f6a974-f41f-46fe-aa5d-4a32484863ee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.982097 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/42f6a974-f41f-46fe-aa5d-4a32484863ee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.983764 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/42f6a974-f41f-46fe-aa5d-4a32484863ee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.987112 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42f6a974-f41f-46fe-aa5d-4a32484863ee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:37 crc kubenswrapper[4754]: I0218 19:42:37.994341 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rlbv\" (UniqueName: \"kubernetes.io/projected/42f6a974-f41f-46fe-aa5d-4a32484863ee-kube-api-access-5rlbv\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " 
pod="openstack/rabbitmq-server-0" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.011567 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"42f6a974-f41f-46fe-aa5d-4a32484863ee\") " pod="openstack/rabbitmq-server-0" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.097631 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.097684 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.137388 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.231030 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c266c06-8bfc-47ba-bab9-6ef36d6294e5" path="/var/lib/kubelet/pods/3c266c06-8bfc-47ba-bab9-6ef36d6294e5/volumes" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.424205 4754 generic.go:334] "Generic (PLEG): container finished" podID="d87128e7-abb0-4dd7-9b9f-04a4393c2313" containerID="a69dbc60091260ceda33a8fe29c15bed8145465523e90ca8c0e175f1a682f469" exitCode=0 Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.424290 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d87128e7-abb0-4dd7-9b9f-04a4393c2313","Type":"ContainerDied","Data":"a69dbc60091260ceda33a8fe29c15bed8145465523e90ca8c0e175f1a682f469"} Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.523050 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.691531 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-tls\") pod \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.691869 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6hfj\" (UniqueName: \"kubernetes.io/projected/d87128e7-abb0-4dd7-9b9f-04a4393c2313-kube-api-access-l6hfj\") pod \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.691902 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/d87128e7-abb0-4dd7-9b9f-04a4393c2313-config-data\") pod \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.691981 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d87128e7-abb0-4dd7-9b9f-04a4393c2313-plugins-conf\") pod \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.691999 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d87128e7-abb0-4dd7-9b9f-04a4393c2313-erlang-cookie-secret\") pod \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.692066 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d87128e7-abb0-4dd7-9b9f-04a4393c2313-pod-info\") pod \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.692100 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-erlang-cookie\") pod \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.692129 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-confd\") pod \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " Feb 18 19:42:38 crc kubenswrapper[4754]: 
I0218 19:42:38.692169 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d87128e7-abb0-4dd7-9b9f-04a4393c2313-server-conf\") pod \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.692238 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.692286 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-plugins\") pod \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\" (UID: \"d87128e7-abb0-4dd7-9b9f-04a4393c2313\") " Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.694277 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d87128e7-abb0-4dd7-9b9f-04a4393c2313-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d87128e7-abb0-4dd7-9b9f-04a4393c2313" (UID: "d87128e7-abb0-4dd7-9b9f-04a4393c2313"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.694520 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d87128e7-abb0-4dd7-9b9f-04a4393c2313" (UID: "d87128e7-abb0-4dd7-9b9f-04a4393c2313"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.695246 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d87128e7-abb0-4dd7-9b9f-04a4393c2313" (UID: "d87128e7-abb0-4dd7-9b9f-04a4393c2313"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.699755 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d87128e7-abb0-4dd7-9b9f-04a4393c2313-pod-info" (OuterVolumeSpecName: "pod-info") pod "d87128e7-abb0-4dd7-9b9f-04a4393c2313" (UID: "d87128e7-abb0-4dd7-9b9f-04a4393c2313"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.700107 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "d87128e7-abb0-4dd7-9b9f-04a4393c2313" (UID: "d87128e7-abb0-4dd7-9b9f-04a4393c2313"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.700124 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d87128e7-abb0-4dd7-9b9f-04a4393c2313" (UID: "d87128e7-abb0-4dd7-9b9f-04a4393c2313"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.701734 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d87128e7-abb0-4dd7-9b9f-04a4393c2313-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d87128e7-abb0-4dd7-9b9f-04a4393c2313" (UID: "d87128e7-abb0-4dd7-9b9f-04a4393c2313"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.706059 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d87128e7-abb0-4dd7-9b9f-04a4393c2313-kube-api-access-l6hfj" (OuterVolumeSpecName: "kube-api-access-l6hfj") pod "d87128e7-abb0-4dd7-9b9f-04a4393c2313" (UID: "d87128e7-abb0-4dd7-9b9f-04a4393c2313"). InnerVolumeSpecName "kube-api-access-l6hfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.723133 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d87128e7-abb0-4dd7-9b9f-04a4393c2313-config-data" (OuterVolumeSpecName: "config-data") pod "d87128e7-abb0-4dd7-9b9f-04a4393c2313" (UID: "d87128e7-abb0-4dd7-9b9f-04a4393c2313"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.732345 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.760794 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d87128e7-abb0-4dd7-9b9f-04a4393c2313-server-conf" (OuterVolumeSpecName: "server-conf") pod "d87128e7-abb0-4dd7-9b9f-04a4393c2313" (UID: "d87128e7-abb0-4dd7-9b9f-04a4393c2313"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.795492 4754 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d87128e7-abb0-4dd7-9b9f-04a4393c2313-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.796828 4754 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d87128e7-abb0-4dd7-9b9f-04a4393c2313-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.799970 4754 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d87128e7-abb0-4dd7-9b9f-04a4393c2313-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.799997 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.800011 4754 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d87128e7-abb0-4dd7-9b9f-04a4393c2313-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.800043 4754 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.800053 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.800063 4754 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.800075 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6hfj\" (UniqueName: \"kubernetes.io/projected/d87128e7-abb0-4dd7-9b9f-04a4393c2313-kube-api-access-l6hfj\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.800529 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d87128e7-abb0-4dd7-9b9f-04a4393c2313-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.824725 4754 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.837663 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d87128e7-abb0-4dd7-9b9f-04a4393c2313" (UID: "d87128e7-abb0-4dd7-9b9f-04a4393c2313"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.902706 4754 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d87128e7-abb0-4dd7-9b9f-04a4393c2313-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:38 crc kubenswrapper[4754]: I0218 19:42:38.902946 4754 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.439390 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"42f6a974-f41f-46fe-aa5d-4a32484863ee","Type":"ContainerStarted","Data":"ef7d9a68aab6f93a0dc8e5dd51122e30f734c6caa983178a4417fac97eb6cfca"} Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.444653 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d87128e7-abb0-4dd7-9b9f-04a4393c2313","Type":"ContainerDied","Data":"33417fe67ee8035ef94b4b38be48aedc00fe82866b924d9be15b477aa9b93d4b"} Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.444813 4754 scope.go:117] "RemoveContainer" containerID="a69dbc60091260ceda33a8fe29c15bed8145465523e90ca8c0e175f1a682f469" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.444754 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.498288 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.502007 4754 scope.go:117] "RemoveContainer" containerID="8dba12de5efdeb76fc4a632f246c1d52ee9b8ba428ec33056f2bef2b2ac692e4" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.515595 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.557005 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:42:39 crc kubenswrapper[4754]: E0218 19:42:39.557601 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87128e7-abb0-4dd7-9b9f-04a4393c2313" containerName="setup-container" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.557629 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87128e7-abb0-4dd7-9b9f-04a4393c2313" containerName="setup-container" Feb 18 19:42:39 crc kubenswrapper[4754]: E0218 19:42:39.557653 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87128e7-abb0-4dd7-9b9f-04a4393c2313" containerName="rabbitmq" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.557663 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87128e7-abb0-4dd7-9b9f-04a4393c2313" containerName="rabbitmq" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.557899 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="d87128e7-abb0-4dd7-9b9f-04a4393c2313" containerName="rabbitmq" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.559234 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.563306 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.563526 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.563702 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.563805 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.563975 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.564116 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-8zt5q" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.564261 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.573887 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.722372 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.722656 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.722675 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.722760 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.722782 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsr7l\" (UniqueName: \"kubernetes.io/projected/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-kube-api-access-qsr7l\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.722910 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.723320 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.723603 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.723691 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.723740 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.723815 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.828223 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.828281 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.828306 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.828332 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.828359 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.828379 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.828394 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.828439 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.828736 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.829348 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.833996 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.834178 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsr7l\" (UniqueName: \"kubernetes.io/projected/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-kube-api-access-qsr7l\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.834235 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.834375 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.835117 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.835688 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.836298 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.839950 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.839999 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.840024 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.846269 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.852770 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsr7l\" (UniqueName: 
\"kubernetes.io/projected/df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4-kube-api-access-qsr7l\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.864060 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:39 crc kubenswrapper[4754]: I0218 19:42:39.906792 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.239657 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d87128e7-abb0-4dd7-9b9f-04a4393c2313" path="/var/lib/kubelet/pods/d87128e7-abb0-4dd7-9b9f-04a4393c2313/volumes" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.240590 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-r49bg"] Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.243095 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.247780 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.260757 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-r49bg"] Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.344927 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.345232 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-dns-svc\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.345259 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.345302 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " 
pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.345372 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-config\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.345387 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v5j4\" (UniqueName: \"kubernetes.io/projected/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-kube-api-access-2v5j4\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.345456 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: W0218 19:42:40.429734 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf9a370d_38f3_4b2f_ae1e_ecab9b8af6f4.slice/crio-5046da82ea8f4d4fa9b46a03132d7a68f21a9484638748f50ca4c0b1c627fa8d WatchSource:0}: Error finding container 5046da82ea8f4d4fa9b46a03132d7a68f21a9484638748f50ca4c0b1c627fa8d: Status 404 returned error can't find the container with id 5046da82ea8f4d4fa9b46a03132d7a68f21a9484638748f50ca4c0b1c627fa8d Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.429821 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:42:40 crc kubenswrapper[4754]: 
I0218 19:42:40.449587 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-config\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.449648 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v5j4\" (UniqueName: \"kubernetes.io/projected/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-kube-api-access-2v5j4\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.449755 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.449822 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.449887 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-dns-svc\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.449917 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.449957 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.451183 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.451582 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.451895 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.452135 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-config\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.452619 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-dns-svc\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.452976 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.465037 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4","Type":"ContainerStarted","Data":"5046da82ea8f4d4fa9b46a03132d7a68f21a9484638748f50ca4c0b1c627fa8d"} Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.472615 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v5j4\" (UniqueName: \"kubernetes.io/projected/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-kube-api-access-2v5j4\") pod \"dnsmasq-dns-d558885bc-r49bg\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:40 crc kubenswrapper[4754]: I0218 19:42:40.574578 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:41 crc kubenswrapper[4754]: I0218 19:42:41.027817 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-r49bg"] Feb 18 19:42:41 crc kubenswrapper[4754]: I0218 19:42:41.476838 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"42f6a974-f41f-46fe-aa5d-4a32484863ee","Type":"ContainerStarted","Data":"b36d9c0c260aaba0c9e3b660a072f3e4e33b044faa505c87745c881cb92ccfa2"} Feb 18 19:42:41 crc kubenswrapper[4754]: I0218 19:42:41.480751 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-r49bg" event={"ID":"1ac78e24-0019-4e9c-aaa4-5d405446bfe2","Type":"ContainerStarted","Data":"a537183ef6130155d2eee1852acf9de9ebfe8de70a8a2037b06dfc30ac59835f"} Feb 18 19:42:42 crc kubenswrapper[4754]: I0218 19:42:42.490237 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ac78e24-0019-4e9c-aaa4-5d405446bfe2" containerID="44440439ad879d701c45586644fadcd5fb88d3f344c7fe9fa787a3e7c1e77da8" exitCode=0 Feb 18 19:42:42 crc kubenswrapper[4754]: I0218 19:42:42.490496 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-r49bg" event={"ID":"1ac78e24-0019-4e9c-aaa4-5d405446bfe2","Type":"ContainerDied","Data":"44440439ad879d701c45586644fadcd5fb88d3f344c7fe9fa787a3e7c1e77da8"} Feb 18 19:42:42 crc kubenswrapper[4754]: I0218 19:42:42.504470 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4","Type":"ContainerStarted","Data":"83c9f9447451ab1b59fecd51f76e2d7cdaa73e45594b350ce20c9ad0dbe88a36"} Feb 18 19:42:43 crc kubenswrapper[4754]: I0218 19:42:43.516953 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-r49bg" 
event={"ID":"1ac78e24-0019-4e9c-aaa4-5d405446bfe2","Type":"ContainerStarted","Data":"826dece4115bc53032e8ea09085d417f83d395d90391fba56c0b7bd0343e2f07"} Feb 18 19:42:43 crc kubenswrapper[4754]: I0218 19:42:43.544773 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-r49bg" podStartSLOduration=3.54474841 podStartE2EDuration="3.54474841s" podCreationTimestamp="2026-02-18 19:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:42:43.538076054 +0000 UTC m=+1465.988488860" watchObservedRunningTime="2026-02-18 19:42:43.54474841 +0000 UTC m=+1465.995161216" Feb 18 19:42:44 crc kubenswrapper[4754]: I0218 19:42:44.526926 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:50 crc kubenswrapper[4754]: I0218 19:42:50.572412 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:42:50 crc kubenswrapper[4754]: I0218 19:42:50.676101 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-kp4c9"] Feb 18 19:42:50 crc kubenswrapper[4754]: I0218 19:42:50.676448 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" podUID="d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96" containerName="dnsmasq-dns" containerID="cri-o://482dac0ad38877742aba98a2faac37810f077741e8c90168c2a70b49959ed367" gracePeriod=10 Feb 18 19:42:50 crc kubenswrapper[4754]: I0218 19:42:50.846236 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-mlgwf"] Feb 18 19:42:50 crc kubenswrapper[4754]: I0218 19:42:50.848549 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:50 crc kubenswrapper[4754]: I0218 19:42:50.866697 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-mlgwf"] Feb 18 19:42:50 crc kubenswrapper[4754]: I0218 19:42:50.997606 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/372e72f8-279c-4f8a-9979-4effbf27adf7-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:50 crc kubenswrapper[4754]: I0218 19:42:50.998444 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/372e72f8-279c-4f8a-9979-4effbf27adf7-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:50 crc kubenswrapper[4754]: I0218 19:42:50.998507 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/372e72f8-279c-4f8a-9979-4effbf27adf7-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:50 crc kubenswrapper[4754]: I0218 19:42:50.998537 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5xmz\" (UniqueName: \"kubernetes.io/projected/372e72f8-279c-4f8a-9979-4effbf27adf7-kube-api-access-k5xmz\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:50 crc kubenswrapper[4754]: I0218 19:42:50.998577 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/372e72f8-279c-4f8a-9979-4effbf27adf7-config\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:50 crc kubenswrapper[4754]: I0218 19:42:50.998762 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/372e72f8-279c-4f8a-9979-4effbf27adf7-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:50 crc kubenswrapper[4754]: I0218 19:42:50.998811 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/372e72f8-279c-4f8a-9979-4effbf27adf7-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.100572 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/372e72f8-279c-4f8a-9979-4effbf27adf7-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.100897 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/372e72f8-279c-4f8a-9979-4effbf27adf7-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.100924 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/372e72f8-279c-4f8a-9979-4effbf27adf7-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.100944 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5xmz\" (UniqueName: \"kubernetes.io/projected/372e72f8-279c-4f8a-9979-4effbf27adf7-kube-api-access-k5xmz\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.100967 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/372e72f8-279c-4f8a-9979-4effbf27adf7-config\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.101058 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/372e72f8-279c-4f8a-9979-4effbf27adf7-dns-svc\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.101088 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/372e72f8-279c-4f8a-9979-4effbf27adf7-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.101861 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/372e72f8-279c-4f8a-9979-4effbf27adf7-ovsdbserver-sb\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.102130 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/372e72f8-279c-4f8a-9979-4effbf27adf7-openstack-edpm-ipam\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.102441 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/372e72f8-279c-4f8a-9979-4effbf27adf7-ovsdbserver-nb\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.102874 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/372e72f8-279c-4f8a-9979-4effbf27adf7-dns-swift-storage-0\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.103048 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/372e72f8-279c-4f8a-9979-4effbf27adf7-config\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.103266 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/372e72f8-279c-4f8a-9979-4effbf27adf7-dns-svc\") pod 
\"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.131811 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5xmz\" (UniqueName: \"kubernetes.io/projected/372e72f8-279c-4f8a-9979-4effbf27adf7-kube-api-access-k5xmz\") pod \"dnsmasq-dns-6b6dc74c5-mlgwf\" (UID: \"372e72f8-279c-4f8a-9979-4effbf27adf7\") " pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.179635 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.320550 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.510417 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-dns-swift-storage-0\") pod \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.510752 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwdgn\" (UniqueName: \"kubernetes.io/projected/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-kube-api-access-vwdgn\") pod \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.510788 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-ovsdbserver-nb\") pod \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " Feb 18 19:42:51 crc 
kubenswrapper[4754]: I0218 19:42:51.510826 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-config\") pod \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.510842 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-ovsdbserver-sb\") pod \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.511002 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-dns-svc\") pod \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\" (UID: \"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96\") " Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.516244 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-kube-api-access-vwdgn" (OuterVolumeSpecName: "kube-api-access-vwdgn") pod "d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96" (UID: "d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96"). InnerVolumeSpecName "kube-api-access-vwdgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.560227 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96" (UID: "d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.571145 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96" (UID: "d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.573672 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96" (UID: "d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.573934 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96" (UID: "d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.575711 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-config" (OuterVolumeSpecName: "config") pod "d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96" (UID: "d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.613890 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.613931 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwdgn\" (UniqueName: \"kubernetes.io/projected/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-kube-api-access-vwdgn\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.613944 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.613954 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.613963 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.613974 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.670818 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b6dc74c5-mlgwf"] Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.787212 4754 generic.go:334] "Generic (PLEG): container finished" podID="d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96" 
containerID="482dac0ad38877742aba98a2faac37810f077741e8c90168c2a70b49959ed367" exitCode=0 Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.787298 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" event={"ID":"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96","Type":"ContainerDied","Data":"482dac0ad38877742aba98a2faac37810f077741e8c90168c2a70b49959ed367"} Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.787325 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.787362 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-kp4c9" event={"ID":"d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96","Type":"ContainerDied","Data":"50d71b86f9c3bc537018cb782c4b9053d403b87c0eb80abd91b3c9fcf49363a3"} Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.787383 4754 scope.go:117] "RemoveContainer" containerID="482dac0ad38877742aba98a2faac37810f077741e8c90168c2a70b49959ed367" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.789386 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" event={"ID":"372e72f8-279c-4f8a-9979-4effbf27adf7","Type":"ContainerStarted","Data":"3f969da4ad2a4f60940199e2f1fd9ea0339f86b3cdf73d529ca72c9de0278329"} Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.883546 4754 scope.go:117] "RemoveContainer" containerID="a9242d0016dc3ecb80e0f210244185bdd55942dea6569c531befc00e9c62e8c6" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.921343 4754 scope.go:117] "RemoveContainer" containerID="482dac0ad38877742aba98a2faac37810f077741e8c90168c2a70b49959ed367" Feb 18 19:42:51 crc kubenswrapper[4754]: E0218 19:42:51.922033 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"482dac0ad38877742aba98a2faac37810f077741e8c90168c2a70b49959ed367\": container with ID starting with 482dac0ad38877742aba98a2faac37810f077741e8c90168c2a70b49959ed367 not found: ID does not exist" containerID="482dac0ad38877742aba98a2faac37810f077741e8c90168c2a70b49959ed367" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.922082 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"482dac0ad38877742aba98a2faac37810f077741e8c90168c2a70b49959ed367"} err="failed to get container status \"482dac0ad38877742aba98a2faac37810f077741e8c90168c2a70b49959ed367\": rpc error: code = NotFound desc = could not find container \"482dac0ad38877742aba98a2faac37810f077741e8c90168c2a70b49959ed367\": container with ID starting with 482dac0ad38877742aba98a2faac37810f077741e8c90168c2a70b49959ed367 not found: ID does not exist" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.922114 4754 scope.go:117] "RemoveContainer" containerID="a9242d0016dc3ecb80e0f210244185bdd55942dea6569c531befc00e9c62e8c6" Feb 18 19:42:51 crc kubenswrapper[4754]: E0218 19:42:51.922585 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9242d0016dc3ecb80e0f210244185bdd55942dea6569c531befc00e9c62e8c6\": container with ID starting with a9242d0016dc3ecb80e0f210244185bdd55942dea6569c531befc00e9c62e8c6 not found: ID does not exist" containerID="a9242d0016dc3ecb80e0f210244185bdd55942dea6569c531befc00e9c62e8c6" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.922649 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9242d0016dc3ecb80e0f210244185bdd55942dea6569c531befc00e9c62e8c6"} err="failed to get container status \"a9242d0016dc3ecb80e0f210244185bdd55942dea6569c531befc00e9c62e8c6\": rpc error: code = NotFound desc = could not find container \"a9242d0016dc3ecb80e0f210244185bdd55942dea6569c531befc00e9c62e8c6\": container with ID 
starting with a9242d0016dc3ecb80e0f210244185bdd55942dea6569c531befc00e9c62e8c6 not found: ID does not exist" Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.928147 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-kp4c9"] Feb 18 19:42:51 crc kubenswrapper[4754]: I0218 19:42:51.940056 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-kp4c9"] Feb 18 19:42:52 crc kubenswrapper[4754]: I0218 19:42:52.224886 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96" path="/var/lib/kubelet/pods/d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96/volumes" Feb 18 19:42:52 crc kubenswrapper[4754]: I0218 19:42:52.802244 4754 generic.go:334] "Generic (PLEG): container finished" podID="372e72f8-279c-4f8a-9979-4effbf27adf7" containerID="a32304affdfeaf5167afd678c8b058650a0f65cd82e36353c132421ddaca0064" exitCode=0 Feb 18 19:42:52 crc kubenswrapper[4754]: I0218 19:42:52.802293 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" event={"ID":"372e72f8-279c-4f8a-9979-4effbf27adf7","Type":"ContainerDied","Data":"a32304affdfeaf5167afd678c8b058650a0f65cd82e36353c132421ddaca0064"} Feb 18 19:42:53 crc kubenswrapper[4754]: I0218 19:42:53.816847 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" event={"ID":"372e72f8-279c-4f8a-9979-4effbf27adf7","Type":"ContainerStarted","Data":"98b5837c3b48cfb6d8b4d40499fd2618082538d4913f745746b03f4293d84934"} Feb 18 19:42:53 crc kubenswrapper[4754]: I0218 19:42:53.817542 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:42:53 crc kubenswrapper[4754]: I0218 19:42:53.846714 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" podStartSLOduration=3.846694366 podStartE2EDuration="3.846694366s" 
podCreationTimestamp="2026-02-18 19:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:42:53.840277588 +0000 UTC m=+1476.290690384" watchObservedRunningTime="2026-02-18 19:42:53.846694366 +0000 UTC m=+1476.297107162" Feb 18 19:43:01 crc kubenswrapper[4754]: I0218 19:43:01.181428 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b6dc74c5-mlgwf" Feb 18 19:43:01 crc kubenswrapper[4754]: I0218 19:43:01.279542 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-r49bg"] Feb 18 19:43:01 crc kubenswrapper[4754]: I0218 19:43:01.279914 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-r49bg" podUID="1ac78e24-0019-4e9c-aaa4-5d405446bfe2" containerName="dnsmasq-dns" containerID="cri-o://826dece4115bc53032e8ea09085d417f83d395d90391fba56c0b7bd0343e2f07" gracePeriod=10 Feb 18 19:43:01 crc kubenswrapper[4754]: I0218 19:43:01.791599 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:43:01 crc kubenswrapper[4754]: I0218 19:43:01.920542 4754 generic.go:334] "Generic (PLEG): container finished" podID="1ac78e24-0019-4e9c-aaa4-5d405446bfe2" containerID="826dece4115bc53032e8ea09085d417f83d395d90391fba56c0b7bd0343e2f07" exitCode=0 Feb 18 19:43:01 crc kubenswrapper[4754]: I0218 19:43:01.920616 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-r49bg" Feb 18 19:43:01 crc kubenswrapper[4754]: I0218 19:43:01.920635 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-r49bg" event={"ID":"1ac78e24-0019-4e9c-aaa4-5d405446bfe2","Type":"ContainerDied","Data":"826dece4115bc53032e8ea09085d417f83d395d90391fba56c0b7bd0343e2f07"} Feb 18 19:43:01 crc kubenswrapper[4754]: I0218 19:43:01.921055 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-r49bg" event={"ID":"1ac78e24-0019-4e9c-aaa4-5d405446bfe2","Type":"ContainerDied","Data":"a537183ef6130155d2eee1852acf9de9ebfe8de70a8a2037b06dfc30ac59835f"} Feb 18 19:43:01 crc kubenswrapper[4754]: I0218 19:43:01.921081 4754 scope.go:117] "RemoveContainer" containerID="826dece4115bc53032e8ea09085d417f83d395d90391fba56c0b7bd0343e2f07" Feb 18 19:43:01 crc kubenswrapper[4754]: I0218 19:43:01.942537 4754 scope.go:117] "RemoveContainer" containerID="44440439ad879d701c45586644fadcd5fb88d3f344c7fe9fa787a3e7c1e77da8" Feb 18 19:43:01 crc kubenswrapper[4754]: I0218 19:43:01.944072 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-ovsdbserver-nb\") pod \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " Feb 18 19:43:01 crc kubenswrapper[4754]: I0218 19:43:01.944189 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-dns-swift-storage-0\") pod \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " Feb 18 19:43:01 crc kubenswrapper[4754]: I0218 19:43:01.944272 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-ovsdbserver-sb\") pod \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " Feb 18 19:43:01 crc kubenswrapper[4754]: I0218 19:43:01.944299 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v5j4\" (UniqueName: \"kubernetes.io/projected/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-kube-api-access-2v5j4\") pod \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " Feb 18 19:43:01 crc kubenswrapper[4754]: I0218 19:43:01.944321 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-dns-svc\") pod \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " Feb 18 19:43:01 crc kubenswrapper[4754]: I0218 19:43:01.944395 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-config\") pod \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " Feb 18 19:43:01 crc kubenswrapper[4754]: I0218 19:43:01.944492 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-openstack-edpm-ipam\") pod \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\" (UID: \"1ac78e24-0019-4e9c-aaa4-5d405446bfe2\") " Feb 18 19:43:01 crc kubenswrapper[4754]: I0218 19:43:01.949887 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-kube-api-access-2v5j4" (OuterVolumeSpecName: "kube-api-access-2v5j4") pod "1ac78e24-0019-4e9c-aaa4-5d405446bfe2" (UID: "1ac78e24-0019-4e9c-aaa4-5d405446bfe2"). InnerVolumeSpecName "kube-api-access-2v5j4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:43:02 crc kubenswrapper[4754]: I0218 19:43:02.004921 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1ac78e24-0019-4e9c-aaa4-5d405446bfe2" (UID: "1ac78e24-0019-4e9c-aaa4-5d405446bfe2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:43:02 crc kubenswrapper[4754]: I0218 19:43:02.006482 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "1ac78e24-0019-4e9c-aaa4-5d405446bfe2" (UID: "1ac78e24-0019-4e9c-aaa4-5d405446bfe2"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:43:02 crc kubenswrapper[4754]: I0218 19:43:02.008551 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1ac78e24-0019-4e9c-aaa4-5d405446bfe2" (UID: "1ac78e24-0019-4e9c-aaa4-5d405446bfe2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:43:02 crc kubenswrapper[4754]: I0218 19:43:02.013410 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1ac78e24-0019-4e9c-aaa4-5d405446bfe2" (UID: "1ac78e24-0019-4e9c-aaa4-5d405446bfe2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:43:02 crc kubenswrapper[4754]: I0218 19:43:02.017294 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1ac78e24-0019-4e9c-aaa4-5d405446bfe2" (UID: "1ac78e24-0019-4e9c-aaa4-5d405446bfe2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:43:02 crc kubenswrapper[4754]: I0218 19:43:02.019026 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-config" (OuterVolumeSpecName: "config") pod "1ac78e24-0019-4e9c-aaa4-5d405446bfe2" (UID: "1ac78e24-0019-4e9c-aaa4-5d405446bfe2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:43:02 crc kubenswrapper[4754]: I0218 19:43:02.047786 4754 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:02 crc kubenswrapper[4754]: I0218 19:43:02.047836 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:02 crc kubenswrapper[4754]: I0218 19:43:02.047850 4754 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:02 crc kubenswrapper[4754]: I0218 19:43:02.047862 4754 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-ovsdbserver-sb\") on node 
\"crc\" DevicePath \"\"" Feb 18 19:43:02 crc kubenswrapper[4754]: I0218 19:43:02.047879 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v5j4\" (UniqueName: \"kubernetes.io/projected/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-kube-api-access-2v5j4\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:02 crc kubenswrapper[4754]: I0218 19:43:02.047893 4754 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:02 crc kubenswrapper[4754]: I0218 19:43:02.047904 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ac78e24-0019-4e9c-aaa4-5d405446bfe2-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:02 crc kubenswrapper[4754]: I0218 19:43:02.087704 4754 scope.go:117] "RemoveContainer" containerID="826dece4115bc53032e8ea09085d417f83d395d90391fba56c0b7bd0343e2f07" Feb 18 19:43:02 crc kubenswrapper[4754]: E0218 19:43:02.088214 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"826dece4115bc53032e8ea09085d417f83d395d90391fba56c0b7bd0343e2f07\": container with ID starting with 826dece4115bc53032e8ea09085d417f83d395d90391fba56c0b7bd0343e2f07 not found: ID does not exist" containerID="826dece4115bc53032e8ea09085d417f83d395d90391fba56c0b7bd0343e2f07" Feb 18 19:43:02 crc kubenswrapper[4754]: I0218 19:43:02.088256 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"826dece4115bc53032e8ea09085d417f83d395d90391fba56c0b7bd0343e2f07"} err="failed to get container status \"826dece4115bc53032e8ea09085d417f83d395d90391fba56c0b7bd0343e2f07\": rpc error: code = NotFound desc = could not find container \"826dece4115bc53032e8ea09085d417f83d395d90391fba56c0b7bd0343e2f07\": container with ID starting with 
826dece4115bc53032e8ea09085d417f83d395d90391fba56c0b7bd0343e2f07 not found: ID does not exist" Feb 18 19:43:02 crc kubenswrapper[4754]: I0218 19:43:02.088280 4754 scope.go:117] "RemoveContainer" containerID="44440439ad879d701c45586644fadcd5fb88d3f344c7fe9fa787a3e7c1e77da8" Feb 18 19:43:02 crc kubenswrapper[4754]: E0218 19:43:02.088744 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44440439ad879d701c45586644fadcd5fb88d3f344c7fe9fa787a3e7c1e77da8\": container with ID starting with 44440439ad879d701c45586644fadcd5fb88d3f344c7fe9fa787a3e7c1e77da8 not found: ID does not exist" containerID="44440439ad879d701c45586644fadcd5fb88d3f344c7fe9fa787a3e7c1e77da8" Feb 18 19:43:02 crc kubenswrapper[4754]: I0218 19:43:02.088771 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44440439ad879d701c45586644fadcd5fb88d3f344c7fe9fa787a3e7c1e77da8"} err="failed to get container status \"44440439ad879d701c45586644fadcd5fb88d3f344c7fe9fa787a3e7c1e77da8\": rpc error: code = NotFound desc = could not find container \"44440439ad879d701c45586644fadcd5fb88d3f344c7fe9fa787a3e7c1e77da8\": container with ID starting with 44440439ad879d701c45586644fadcd5fb88d3f344c7fe9fa787a3e7c1e77da8 not found: ID does not exist" Feb 18 19:43:02 crc kubenswrapper[4754]: I0218 19:43:02.269860 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-r49bg"] Feb 18 19:43:02 crc kubenswrapper[4754]: I0218 19:43:02.281971 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-r49bg"] Feb 18 19:43:04 crc kubenswrapper[4754]: I0218 19:43:04.229364 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ac78e24-0019-4e9c-aaa4-5d405446bfe2" path="/var/lib/kubelet/pods/1ac78e24-0019-4e9c-aaa4-5d405446bfe2/volumes" Feb 18 19:43:08 crc kubenswrapper[4754]: I0218 19:43:08.096560 4754 patch_prober.go:28] 
interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:43:08 crc kubenswrapper[4754]: I0218 19:43:08.096848 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:43:08 crc kubenswrapper[4754]: I0218 19:43:08.096905 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:43:08 crc kubenswrapper[4754]: I0218 19:43:08.097905 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333"} pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:43:08 crc kubenswrapper[4754]: I0218 19:43:08.098081 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" containerID="cri-o://9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" gracePeriod=600 Feb 18 19:43:08 crc kubenswrapper[4754]: E0218 19:43:08.224406 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:43:09 crc kubenswrapper[4754]: I0218 19:43:09.000217 4754 generic.go:334] "Generic (PLEG): container finished" podID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" exitCode=0 Feb 18 19:43:09 crc kubenswrapper[4754]: I0218 19:43:09.000309 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerDied","Data":"9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333"} Feb 18 19:43:09 crc kubenswrapper[4754]: I0218 19:43:09.001979 4754 scope.go:117] "RemoveContainer" containerID="96444caa3510b8204a97c50f5062d060301a59e158a321374c108effb01ab6a8" Feb 18 19:43:09 crc kubenswrapper[4754]: I0218 19:43:09.002838 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:43:09 crc kubenswrapper[4754]: E0218 19:43:09.003389 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:43:13 crc kubenswrapper[4754]: I0218 19:43:13.051026 4754 generic.go:334] "Generic (PLEG): container finished" podID="42f6a974-f41f-46fe-aa5d-4a32484863ee" containerID="b36d9c0c260aaba0c9e3b660a072f3e4e33b044faa505c87745c881cb92ccfa2" exitCode=0 Feb 18 19:43:13 crc kubenswrapper[4754]: I0218 19:43:13.051100 4754 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"42f6a974-f41f-46fe-aa5d-4a32484863ee","Type":"ContainerDied","Data":"b36d9c0c260aaba0c9e3b660a072f3e4e33b044faa505c87745c881cb92ccfa2"} Feb 18 19:43:14 crc kubenswrapper[4754]: I0218 19:43:14.063215 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"42f6a974-f41f-46fe-aa5d-4a32484863ee","Type":"ContainerStarted","Data":"f9145094903e9752ac3083a1eef7206eb4735ae42d677c3e1b1326c0f1aa40d1"} Feb 18 19:43:14 crc kubenswrapper[4754]: I0218 19:43:14.063954 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 19:43:14 crc kubenswrapper[4754]: I0218 19:43:14.096239 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.096130779 podStartE2EDuration="37.096130779s" podCreationTimestamp="2026-02-18 19:42:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:43:14.092084723 +0000 UTC m=+1496.542497559" watchObservedRunningTime="2026-02-18 19:43:14.096130779 +0000 UTC m=+1496.546543585" Feb 18 19:43:14 crc kubenswrapper[4754]: E0218 19:43:14.744485 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf9a370d_38f3_4b2f_ae1e_ecab9b8af6f4.slice/crio-conmon-83c9f9447451ab1b59fecd51f76e2d7cdaa73e45594b350ce20c9ad0dbe88a36.scope\": RecentStats: unable to find data in memory cache]" Feb 18 19:43:15 crc kubenswrapper[4754]: I0218 19:43:15.076349 4754 generic.go:334] "Generic (PLEG): container finished" podID="df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4" containerID="83c9f9447451ab1b59fecd51f76e2d7cdaa73e45594b350ce20c9ad0dbe88a36" exitCode=0 Feb 18 19:43:15 crc kubenswrapper[4754]: I0218 
19:43:15.077582 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4","Type":"ContainerDied","Data":"83c9f9447451ab1b59fecd51f76e2d7cdaa73e45594b350ce20c9ad0dbe88a36"} Feb 18 19:43:16 crc kubenswrapper[4754]: I0218 19:43:16.092578 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"df9a370d-38f3-4b2f-ae1e-ecab9b8af6f4","Type":"ContainerStarted","Data":"486d2f165d4f9545bfbe9d23969904ec0422e5c9289293f61e4e06ab11198fbe"} Feb 18 19:43:16 crc kubenswrapper[4754]: I0218 19:43:16.093085 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:43:16 crc kubenswrapper[4754]: I0218 19:43:16.121368 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.121338346 podStartE2EDuration="37.121338346s" podCreationTimestamp="2026-02-18 19:42:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:43:16.12084118 +0000 UTC m=+1498.571254006" watchObservedRunningTime="2026-02-18 19:43:16.121338346 +0000 UTC m=+1498.571751142" Feb 18 19:43:19 crc kubenswrapper[4754]: I0218 19:43:19.863622 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8"] Feb 18 19:43:19 crc kubenswrapper[4754]: E0218 19:43:19.864866 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac78e24-0019-4e9c-aaa4-5d405446bfe2" containerName="dnsmasq-dns" Feb 18 19:43:19 crc kubenswrapper[4754]: I0218 19:43:19.864901 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac78e24-0019-4e9c-aaa4-5d405446bfe2" containerName="dnsmasq-dns" Feb 18 19:43:19 crc kubenswrapper[4754]: E0218 19:43:19.864927 4754 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96" containerName="dnsmasq-dns" Feb 18 19:43:19 crc kubenswrapper[4754]: I0218 19:43:19.864941 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96" containerName="dnsmasq-dns" Feb 18 19:43:19 crc kubenswrapper[4754]: E0218 19:43:19.864972 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96" containerName="init" Feb 18 19:43:19 crc kubenswrapper[4754]: I0218 19:43:19.864980 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96" containerName="init" Feb 18 19:43:19 crc kubenswrapper[4754]: E0218 19:43:19.864997 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac78e24-0019-4e9c-aaa4-5d405446bfe2" containerName="init" Feb 18 19:43:19 crc kubenswrapper[4754]: I0218 19:43:19.865008 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac78e24-0019-4e9c-aaa4-5d405446bfe2" containerName="init" Feb 18 19:43:19 crc kubenswrapper[4754]: I0218 19:43:19.865361 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6bfc7e0-35f6-4b69-bb0c-0f0077a18c96" containerName="dnsmasq-dns" Feb 18 19:43:19 crc kubenswrapper[4754]: I0218 19:43:19.865389 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ac78e24-0019-4e9c-aaa4-5d405446bfe2" containerName="dnsmasq-dns" Feb 18 19:43:19 crc kubenswrapper[4754]: I0218 19:43:19.867665 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" Feb 18 19:43:19 crc kubenswrapper[4754]: I0218 19:43:19.872230 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:43:19 crc kubenswrapper[4754]: I0218 19:43:19.872739 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bt6gd" Feb 18 19:43:19 crc kubenswrapper[4754]: I0218 19:43:19.873299 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:43:19 crc kubenswrapper[4754]: I0218 19:43:19.873348 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:43:19 crc kubenswrapper[4754]: I0218 19:43:19.890560 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8"] Feb 18 19:43:19 crc kubenswrapper[4754]: I0218 19:43:19.936593 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67kq8\" (UniqueName: \"kubernetes.io/projected/2533079d-ce84-4c49-b6cd-424050d009ca-kube-api-access-67kq8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8\" (UID: \"2533079d-ce84-4c49-b6cd-424050d009ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" Feb 18 19:43:19 crc kubenswrapper[4754]: I0218 19:43:19.936915 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2533079d-ce84-4c49-b6cd-424050d009ca-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8\" (UID: \"2533079d-ce84-4c49-b6cd-424050d009ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" Feb 18 19:43:19 crc kubenswrapper[4754]: 
I0218 19:43:19.936993 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2533079d-ce84-4c49-b6cd-424050d009ca-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8\" (UID: \"2533079d-ce84-4c49-b6cd-424050d009ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" Feb 18 19:43:19 crc kubenswrapper[4754]: I0218 19:43:19.937244 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2533079d-ce84-4c49-b6cd-424050d009ca-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8\" (UID: \"2533079d-ce84-4c49-b6cd-424050d009ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" Feb 18 19:43:20 crc kubenswrapper[4754]: I0218 19:43:20.039476 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2533079d-ce84-4c49-b6cd-424050d009ca-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8\" (UID: \"2533079d-ce84-4c49-b6cd-424050d009ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" Feb 18 19:43:20 crc kubenswrapper[4754]: I0218 19:43:20.039562 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67kq8\" (UniqueName: \"kubernetes.io/projected/2533079d-ce84-4c49-b6cd-424050d009ca-kube-api-access-67kq8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8\" (UID: \"2533079d-ce84-4c49-b6cd-424050d009ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" Feb 18 19:43:20 crc kubenswrapper[4754]: I0218 19:43:20.039675 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2533079d-ce84-4c49-b6cd-424050d009ca-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8\" (UID: \"2533079d-ce84-4c49-b6cd-424050d009ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" Feb 18 19:43:20 crc kubenswrapper[4754]: I0218 19:43:20.039713 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2533079d-ce84-4c49-b6cd-424050d009ca-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8\" (UID: \"2533079d-ce84-4c49-b6cd-424050d009ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" Feb 18 19:43:20 crc kubenswrapper[4754]: I0218 19:43:20.045639 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2533079d-ce84-4c49-b6cd-424050d009ca-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8\" (UID: \"2533079d-ce84-4c49-b6cd-424050d009ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" Feb 18 19:43:20 crc kubenswrapper[4754]: I0218 19:43:20.048579 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2533079d-ce84-4c49-b6cd-424050d009ca-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8\" (UID: \"2533079d-ce84-4c49-b6cd-424050d009ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" Feb 18 19:43:20 crc kubenswrapper[4754]: I0218 19:43:20.049344 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2533079d-ce84-4c49-b6cd-424050d009ca-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8\" (UID: \"2533079d-ce84-4c49-b6cd-424050d009ca\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" Feb 18 19:43:20 crc kubenswrapper[4754]: I0218 19:43:20.062618 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67kq8\" (UniqueName: \"kubernetes.io/projected/2533079d-ce84-4c49-b6cd-424050d009ca-kube-api-access-67kq8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8\" (UID: \"2533079d-ce84-4c49-b6cd-424050d009ca\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" Feb 18 19:43:20 crc kubenswrapper[4754]: I0218 19:43:20.192345 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" Feb 18 19:43:20 crc kubenswrapper[4754]: W0218 19:43:20.842689 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2533079d_ce84_4c49_b6cd_424050d009ca.slice/crio-d178075b88a10c48d4af5a9210841cb36e7ea5695426e1d4be74144dce778763 WatchSource:0}: Error finding container d178075b88a10c48d4af5a9210841cb36e7ea5695426e1d4be74144dce778763: Status 404 returned error can't find the container with id d178075b88a10c48d4af5a9210841cb36e7ea5695426e1d4be74144dce778763 Feb 18 19:43:20 crc kubenswrapper[4754]: I0218 19:43:20.849621 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8"] Feb 18 19:43:21 crc kubenswrapper[4754]: I0218 19:43:21.159467 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" event={"ID":"2533079d-ce84-4c49-b6cd-424050d009ca","Type":"ContainerStarted","Data":"d178075b88a10c48d4af5a9210841cb36e7ea5695426e1d4be74144dce778763"} Feb 18 19:43:24 crc kubenswrapper[4754]: I0218 19:43:24.210050 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:43:24 crc 
kubenswrapper[4754]: E0218 19:43:24.210725 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:43:28 crc kubenswrapper[4754]: I0218 19:43:28.140307 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 19:43:29 crc kubenswrapper[4754]: I0218 19:43:29.910400 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:43:34 crc kubenswrapper[4754]: I0218 19:43:34.248829 4754 scope.go:117] "RemoveContainer" containerID="42ecf3304606833ca11434c61a2b5c09933654690e018153c84cb8ee83d97cfe" Feb 18 19:43:34 crc kubenswrapper[4754]: I0218 19:43:34.289586 4754 scope.go:117] "RemoveContainer" containerID="8757c0041c521b210d90d1dfd35bd4fe1cb8d48b9d64d682a7635c2ad0362d92" Feb 18 19:43:34 crc kubenswrapper[4754]: I0218 19:43:34.337019 4754 scope.go:117] "RemoveContainer" containerID="71ffccd940dea48b446679dc068d8116f6fcd39ccb2efee571b88e7d3f37c452" Feb 18 19:43:34 crc kubenswrapper[4754]: I0218 19:43:34.360211 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" event={"ID":"2533079d-ce84-4c49-b6cd-424050d009ca","Type":"ContainerStarted","Data":"7ec322a69888a5eb656d8152531b43ccab6063c93bb9085ef579f4e2e8a1e029"} Feb 18 19:43:34 crc kubenswrapper[4754]: I0218 19:43:34.385530 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" podStartSLOduration=3.107859047 podStartE2EDuration="15.385503167s" 
podCreationTimestamp="2026-02-18 19:43:19 +0000 UTC" firstStartedPulling="2026-02-18 19:43:20.845073253 +0000 UTC m=+1503.295486049" lastFinishedPulling="2026-02-18 19:43:33.122717363 +0000 UTC m=+1515.573130169" observedRunningTime="2026-02-18 19:43:34.381349408 +0000 UTC m=+1516.831762214" watchObservedRunningTime="2026-02-18 19:43:34.385503167 +0000 UTC m=+1516.835915963" Feb 18 19:43:34 crc kubenswrapper[4754]: I0218 19:43:34.408853 4754 scope.go:117] "RemoveContainer" containerID="a18468bd4755cfe83e3d66e613b0bd58c6cd0663fa300adb40aecd0b9c589f74" Feb 18 19:43:37 crc kubenswrapper[4754]: I0218 19:43:37.210080 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:43:37 crc kubenswrapper[4754]: E0218 19:43:37.211586 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:43:44 crc kubenswrapper[4754]: I0218 19:43:44.502311 4754 generic.go:334] "Generic (PLEG): container finished" podID="2533079d-ce84-4c49-b6cd-424050d009ca" containerID="7ec322a69888a5eb656d8152531b43ccab6063c93bb9085ef579f4e2e8a1e029" exitCode=0 Feb 18 19:43:44 crc kubenswrapper[4754]: I0218 19:43:44.502948 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" event={"ID":"2533079d-ce84-4c49-b6cd-424050d009ca","Type":"ContainerDied","Data":"7ec322a69888a5eb656d8152531b43ccab6063c93bb9085ef579f4e2e8a1e029"} Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.008949 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.130113 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2533079d-ce84-4c49-b6cd-424050d009ca-inventory\") pod \"2533079d-ce84-4c49-b6cd-424050d009ca\" (UID: \"2533079d-ce84-4c49-b6cd-424050d009ca\") " Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.130255 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2533079d-ce84-4c49-b6cd-424050d009ca-repo-setup-combined-ca-bundle\") pod \"2533079d-ce84-4c49-b6cd-424050d009ca\" (UID: \"2533079d-ce84-4c49-b6cd-424050d009ca\") " Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.130414 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2533079d-ce84-4c49-b6cd-424050d009ca-ssh-key-openstack-edpm-ipam\") pod \"2533079d-ce84-4c49-b6cd-424050d009ca\" (UID: \"2533079d-ce84-4c49-b6cd-424050d009ca\") " Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.130584 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67kq8\" (UniqueName: \"kubernetes.io/projected/2533079d-ce84-4c49-b6cd-424050d009ca-kube-api-access-67kq8\") pod \"2533079d-ce84-4c49-b6cd-424050d009ca\" (UID: \"2533079d-ce84-4c49-b6cd-424050d009ca\") " Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.135906 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2533079d-ce84-4c49-b6cd-424050d009ca-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2533079d-ce84-4c49-b6cd-424050d009ca" (UID: "2533079d-ce84-4c49-b6cd-424050d009ca"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.139545 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2533079d-ce84-4c49-b6cd-424050d009ca-kube-api-access-67kq8" (OuterVolumeSpecName: "kube-api-access-67kq8") pod "2533079d-ce84-4c49-b6cd-424050d009ca" (UID: "2533079d-ce84-4c49-b6cd-424050d009ca"). InnerVolumeSpecName "kube-api-access-67kq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.158187 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2533079d-ce84-4c49-b6cd-424050d009ca-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2533079d-ce84-4c49-b6cd-424050d009ca" (UID: "2533079d-ce84-4c49-b6cd-424050d009ca"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.158316 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2533079d-ce84-4c49-b6cd-424050d009ca-inventory" (OuterVolumeSpecName: "inventory") pod "2533079d-ce84-4c49-b6cd-424050d009ca" (UID: "2533079d-ce84-4c49-b6cd-424050d009ca"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.233759 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67kq8\" (UniqueName: \"kubernetes.io/projected/2533079d-ce84-4c49-b6cd-424050d009ca-kube-api-access-67kq8\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.234048 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2533079d-ce84-4c49-b6cd-424050d009ca-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.234123 4754 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2533079d-ce84-4c49-b6cd-424050d009ca-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.234372 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2533079d-ce84-4c49-b6cd-424050d009ca-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.525246 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" event={"ID":"2533079d-ce84-4c49-b6cd-424050d009ca","Type":"ContainerDied","Data":"d178075b88a10c48d4af5a9210841cb36e7ea5695426e1d4be74144dce778763"} Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.525301 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d178075b88a10c48d4af5a9210841cb36e7ea5695426e1d4be74144dce778763" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.525489 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bxlq8" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.639106 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4"] Feb 18 19:43:46 crc kubenswrapper[4754]: E0218 19:43:46.639706 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2533079d-ce84-4c49-b6cd-424050d009ca" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.639729 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="2533079d-ce84-4c49-b6cd-424050d009ca" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.639988 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="2533079d-ce84-4c49-b6cd-424050d009ca" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.640848 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.643247 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.643589 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bt6gd" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.643830 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.643978 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.649104 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4"] Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.744795 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de85ee81-b453-40cb-9c7d-e00236dccfea-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9qpb4\" (UID: \"de85ee81-b453-40cb-9c7d-e00236dccfea\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.744939 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de85ee81-b453-40cb-9c7d-e00236dccfea-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9qpb4\" (UID: \"de85ee81-b453-40cb-9c7d-e00236dccfea\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.744985 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bhr2\" (UniqueName: \"kubernetes.io/projected/de85ee81-b453-40cb-9c7d-e00236dccfea-kube-api-access-6bhr2\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9qpb4\" (UID: \"de85ee81-b453-40cb-9c7d-e00236dccfea\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.846460 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de85ee81-b453-40cb-9c7d-e00236dccfea-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9qpb4\" (UID: \"de85ee81-b453-40cb-9c7d-e00236dccfea\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.846768 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bhr2\" (UniqueName: \"kubernetes.io/projected/de85ee81-b453-40cb-9c7d-e00236dccfea-kube-api-access-6bhr2\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9qpb4\" (UID: \"de85ee81-b453-40cb-9c7d-e00236dccfea\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.846846 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de85ee81-b453-40cb-9c7d-e00236dccfea-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9qpb4\" (UID: \"de85ee81-b453-40cb-9c7d-e00236dccfea\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.850600 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de85ee81-b453-40cb-9c7d-e00236dccfea-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9qpb4\" (UID: 
\"de85ee81-b453-40cb-9c7d-e00236dccfea\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4" Feb 18 19:43:46 crc kubenswrapper[4754]: I0218 19:43:46.850855 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de85ee81-b453-40cb-9c7d-e00236dccfea-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9qpb4\" (UID: \"de85ee81-b453-40cb-9c7d-e00236dccfea\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4" Feb 18 19:43:47 crc kubenswrapper[4754]: I0218 19:43:47.227463 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bhr2\" (UniqueName: \"kubernetes.io/projected/de85ee81-b453-40cb-9c7d-e00236dccfea-kube-api-access-6bhr2\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9qpb4\" (UID: \"de85ee81-b453-40cb-9c7d-e00236dccfea\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4" Feb 18 19:43:47 crc kubenswrapper[4754]: I0218 19:43:47.258860 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4" Feb 18 19:43:47 crc kubenswrapper[4754]: I0218 19:43:47.820664 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4"] Feb 18 19:43:48 crc kubenswrapper[4754]: I0218 19:43:48.551827 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4" event={"ID":"de85ee81-b453-40cb-9c7d-e00236dccfea","Type":"ContainerStarted","Data":"bb0fad1e746ea9a5b536c57944fd93aeaee712df101da778fb3ff21687496eb5"} Feb 18 19:43:49 crc kubenswrapper[4754]: I0218 19:43:49.565706 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4" event={"ID":"de85ee81-b453-40cb-9c7d-e00236dccfea","Type":"ContainerStarted","Data":"67f625df4b665f57b0bc52f5a1f562ec390098b30abaa38b05503902a938486b"} Feb 18 19:43:49 crc kubenswrapper[4754]: I0218 19:43:49.596817 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4" podStartSLOduration=3.037250799 podStartE2EDuration="3.596789554s" podCreationTimestamp="2026-02-18 19:43:46 +0000 UTC" firstStartedPulling="2026-02-18 19:43:47.82420254 +0000 UTC m=+1530.274615376" lastFinishedPulling="2026-02-18 19:43:48.383741335 +0000 UTC m=+1530.834154131" observedRunningTime="2026-02-18 19:43:49.589432473 +0000 UTC m=+1532.039845269" watchObservedRunningTime="2026-02-18 19:43:49.596789554 +0000 UTC m=+1532.047202360" Feb 18 19:43:51 crc kubenswrapper[4754]: I0218 19:43:51.210377 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:43:51 crc kubenswrapper[4754]: E0218 19:43:51.210904 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:43:51 crc kubenswrapper[4754]: I0218 19:43:51.588814 4754 generic.go:334] "Generic (PLEG): container finished" podID="de85ee81-b453-40cb-9c7d-e00236dccfea" containerID="67f625df4b665f57b0bc52f5a1f562ec390098b30abaa38b05503902a938486b" exitCode=0 Feb 18 19:43:51 crc kubenswrapper[4754]: I0218 19:43:51.588896 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4" event={"ID":"de85ee81-b453-40cb-9c7d-e00236dccfea","Type":"ContainerDied","Data":"67f625df4b665f57b0bc52f5a1f562ec390098b30abaa38b05503902a938486b"} Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.122288 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.179492 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de85ee81-b453-40cb-9c7d-e00236dccfea-inventory\") pod \"de85ee81-b453-40cb-9c7d-e00236dccfea\" (UID: \"de85ee81-b453-40cb-9c7d-e00236dccfea\") " Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.179641 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bhr2\" (UniqueName: \"kubernetes.io/projected/de85ee81-b453-40cb-9c7d-e00236dccfea-kube-api-access-6bhr2\") pod \"de85ee81-b453-40cb-9c7d-e00236dccfea\" (UID: \"de85ee81-b453-40cb-9c7d-e00236dccfea\") " Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.179887 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/de85ee81-b453-40cb-9c7d-e00236dccfea-ssh-key-openstack-edpm-ipam\") pod \"de85ee81-b453-40cb-9c7d-e00236dccfea\" (UID: \"de85ee81-b453-40cb-9c7d-e00236dccfea\") " Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.186420 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de85ee81-b453-40cb-9c7d-e00236dccfea-kube-api-access-6bhr2" (OuterVolumeSpecName: "kube-api-access-6bhr2") pod "de85ee81-b453-40cb-9c7d-e00236dccfea" (UID: "de85ee81-b453-40cb-9c7d-e00236dccfea"). InnerVolumeSpecName "kube-api-access-6bhr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.212172 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de85ee81-b453-40cb-9c7d-e00236dccfea-inventory" (OuterVolumeSpecName: "inventory") pod "de85ee81-b453-40cb-9c7d-e00236dccfea" (UID: "de85ee81-b453-40cb-9c7d-e00236dccfea"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.214072 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de85ee81-b453-40cb-9c7d-e00236dccfea-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "de85ee81-b453-40cb-9c7d-e00236dccfea" (UID: "de85ee81-b453-40cb-9c7d-e00236dccfea"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.282840 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de85ee81-b453-40cb-9c7d-e00236dccfea-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.282870 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de85ee81-b453-40cb-9c7d-e00236dccfea-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.282880 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bhr2\" (UniqueName: \"kubernetes.io/projected/de85ee81-b453-40cb-9c7d-e00236dccfea-kube-api-access-6bhr2\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.625655 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.625652 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9qpb4" event={"ID":"de85ee81-b453-40cb-9c7d-e00236dccfea","Type":"ContainerDied","Data":"bb0fad1e746ea9a5b536c57944fd93aeaee712df101da778fb3ff21687496eb5"} Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.626225 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb0fad1e746ea9a5b536c57944fd93aeaee712df101da778fb3ff21687496eb5" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.723655 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx"] Feb 18 19:43:53 crc kubenswrapper[4754]: E0218 19:43:53.724122 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de85ee81-b453-40cb-9c7d-e00236dccfea" 
containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.724165 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="de85ee81-b453-40cb-9c7d-e00236dccfea" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.724383 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="de85ee81-b453-40cb-9c7d-e00236dccfea" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.725110 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.728730 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.729041 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.730496 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bt6gd" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.735515 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.741762 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx"] Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.800312 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx\" (UID: \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.800482 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4zs9\" (UniqueName: \"kubernetes.io/projected/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-kube-api-access-h4zs9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx\" (UID: \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.800544 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx\" (UID: \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.800602 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx\" (UID: \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.903431 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx\" (UID: \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 
19:43:53.903579 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx\" (UID: \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.904369 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx\" (UID: \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.904582 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4zs9\" (UniqueName: \"kubernetes.io/projected/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-kube-api-access-h4zs9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx\" (UID: \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.909426 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx\" (UID: \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.910215 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx\" (UID: \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.918743 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx\" (UID: \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" Feb 18 19:43:53 crc kubenswrapper[4754]: I0218 19:43:53.921284 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4zs9\" (UniqueName: \"kubernetes.io/projected/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-kube-api-access-h4zs9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx\" (UID: \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" Feb 18 19:43:54 crc kubenswrapper[4754]: I0218 19:43:54.082234 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" Feb 18 19:43:54 crc kubenswrapper[4754]: I0218 19:43:54.661811 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx"] Feb 18 19:43:55 crc kubenswrapper[4754]: I0218 19:43:55.650423 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" event={"ID":"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5","Type":"ContainerStarted","Data":"5e706312ccbe425b22b5b58f26be68c349e76f1790f573d9dd12a6ca93349993"} Feb 18 19:43:55 crc kubenswrapper[4754]: I0218 19:43:55.650751 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" event={"ID":"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5","Type":"ContainerStarted","Data":"cff76349eba9fb71cb1abacf68efd6a091f952d14aea1284e55f76e2a2d32af7"} Feb 18 19:43:55 crc kubenswrapper[4754]: I0218 19:43:55.691338 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" podStartSLOduration=2.275735883 podStartE2EDuration="2.691311335s" podCreationTimestamp="2026-02-18 19:43:53 +0000 UTC" firstStartedPulling="2026-02-18 19:43:54.673061969 +0000 UTC m=+1537.123474795" lastFinishedPulling="2026-02-18 19:43:55.088637431 +0000 UTC m=+1537.539050247" observedRunningTime="2026-02-18 19:43:55.68284819 +0000 UTC m=+1538.133260986" watchObservedRunningTime="2026-02-18 19:43:55.691311335 +0000 UTC m=+1538.141724151" Feb 18 19:44:06 crc kubenswrapper[4754]: I0218 19:44:06.209796 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:44:06 crc kubenswrapper[4754]: E0218 19:44:06.211973 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:44:06 crc kubenswrapper[4754]: I0218 19:44:06.384588 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ww6c6"] Feb 18 19:44:06 crc kubenswrapper[4754]: I0218 19:44:06.386981 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ww6c6" Feb 18 19:44:06 crc kubenswrapper[4754]: I0218 19:44:06.399700 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ww6c6"] Feb 18 19:44:06 crc kubenswrapper[4754]: I0218 19:44:06.493065 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hg7j\" (UniqueName: \"kubernetes.io/projected/bc7642d2-e76d-4ed9-9453-a950916d1e7a-kube-api-access-8hg7j\") pod \"redhat-marketplace-ww6c6\" (UID: \"bc7642d2-e76d-4ed9-9453-a950916d1e7a\") " pod="openshift-marketplace/redhat-marketplace-ww6c6" Feb 18 19:44:06 crc kubenswrapper[4754]: I0218 19:44:06.493127 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7642d2-e76d-4ed9-9453-a950916d1e7a-catalog-content\") pod \"redhat-marketplace-ww6c6\" (UID: \"bc7642d2-e76d-4ed9-9453-a950916d1e7a\") " pod="openshift-marketplace/redhat-marketplace-ww6c6" Feb 18 19:44:06 crc kubenswrapper[4754]: I0218 19:44:06.493239 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7642d2-e76d-4ed9-9453-a950916d1e7a-utilities\") pod \"redhat-marketplace-ww6c6\" (UID: \"bc7642d2-e76d-4ed9-9453-a950916d1e7a\") " 
pod="openshift-marketplace/redhat-marketplace-ww6c6" Feb 18 19:44:06 crc kubenswrapper[4754]: I0218 19:44:06.594907 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7642d2-e76d-4ed9-9453-a950916d1e7a-catalog-content\") pod \"redhat-marketplace-ww6c6\" (UID: \"bc7642d2-e76d-4ed9-9453-a950916d1e7a\") " pod="openshift-marketplace/redhat-marketplace-ww6c6" Feb 18 19:44:06 crc kubenswrapper[4754]: I0218 19:44:06.595039 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7642d2-e76d-4ed9-9453-a950916d1e7a-utilities\") pod \"redhat-marketplace-ww6c6\" (UID: \"bc7642d2-e76d-4ed9-9453-a950916d1e7a\") " pod="openshift-marketplace/redhat-marketplace-ww6c6" Feb 18 19:44:06 crc kubenswrapper[4754]: I0218 19:44:06.595153 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hg7j\" (UniqueName: \"kubernetes.io/projected/bc7642d2-e76d-4ed9-9453-a950916d1e7a-kube-api-access-8hg7j\") pod \"redhat-marketplace-ww6c6\" (UID: \"bc7642d2-e76d-4ed9-9453-a950916d1e7a\") " pod="openshift-marketplace/redhat-marketplace-ww6c6" Feb 18 19:44:06 crc kubenswrapper[4754]: I0218 19:44:06.595528 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7642d2-e76d-4ed9-9453-a950916d1e7a-catalog-content\") pod \"redhat-marketplace-ww6c6\" (UID: \"bc7642d2-e76d-4ed9-9453-a950916d1e7a\") " pod="openshift-marketplace/redhat-marketplace-ww6c6" Feb 18 19:44:06 crc kubenswrapper[4754]: I0218 19:44:06.595847 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7642d2-e76d-4ed9-9453-a950916d1e7a-utilities\") pod \"redhat-marketplace-ww6c6\" (UID: \"bc7642d2-e76d-4ed9-9453-a950916d1e7a\") " pod="openshift-marketplace/redhat-marketplace-ww6c6" 
Feb 18 19:44:06 crc kubenswrapper[4754]: I0218 19:44:06.614062 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hg7j\" (UniqueName: \"kubernetes.io/projected/bc7642d2-e76d-4ed9-9453-a950916d1e7a-kube-api-access-8hg7j\") pod \"redhat-marketplace-ww6c6\" (UID: \"bc7642d2-e76d-4ed9-9453-a950916d1e7a\") " pod="openshift-marketplace/redhat-marketplace-ww6c6" Feb 18 19:44:06 crc kubenswrapper[4754]: I0218 19:44:06.706384 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ww6c6" Feb 18 19:44:07 crc kubenswrapper[4754]: I0218 19:44:07.190990 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ww6c6"] Feb 18 19:44:07 crc kubenswrapper[4754]: I0218 19:44:07.790336 4754 generic.go:334] "Generic (PLEG): container finished" podID="bc7642d2-e76d-4ed9-9453-a950916d1e7a" containerID="f722ca9585e69faa6727734ee3ced2ecab7fa99c5bd9dc83e7c0ae0002ee0316" exitCode=0 Feb 18 19:44:07 crc kubenswrapper[4754]: I0218 19:44:07.790407 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ww6c6" event={"ID":"bc7642d2-e76d-4ed9-9453-a950916d1e7a","Type":"ContainerDied","Data":"f722ca9585e69faa6727734ee3ced2ecab7fa99c5bd9dc83e7c0ae0002ee0316"} Feb 18 19:44:07 crc kubenswrapper[4754]: I0218 19:44:07.790455 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ww6c6" event={"ID":"bc7642d2-e76d-4ed9-9453-a950916d1e7a","Type":"ContainerStarted","Data":"8a80a9f720ea87ad92ff06627b9ebebcedc0204725d1ce8df0ef838deee9954c"} Feb 18 19:44:09 crc kubenswrapper[4754]: I0218 19:44:09.823127 4754 generic.go:334] "Generic (PLEG): container finished" podID="bc7642d2-e76d-4ed9-9453-a950916d1e7a" containerID="80d83edeb055ae15f300815be32451e65d5d14a08ef6c41b42586dced4fbed6d" exitCode=0 Feb 18 19:44:09 crc kubenswrapper[4754]: I0218 19:44:09.823330 4754 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ww6c6" event={"ID":"bc7642d2-e76d-4ed9-9453-a950916d1e7a","Type":"ContainerDied","Data":"80d83edeb055ae15f300815be32451e65d5d14a08ef6c41b42586dced4fbed6d"} Feb 18 19:44:11 crc kubenswrapper[4754]: I0218 19:44:11.847396 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ww6c6" event={"ID":"bc7642d2-e76d-4ed9-9453-a950916d1e7a","Type":"ContainerStarted","Data":"62f102d28887835bf840bf5d85df902fe5c9041c60505db36f131622b1bf3019"} Feb 18 19:44:11 crc kubenswrapper[4754]: I0218 19:44:11.876786 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ww6c6" podStartSLOduration=2.8116545139999998 podStartE2EDuration="5.876759777s" podCreationTimestamp="2026-02-18 19:44:06 +0000 UTC" firstStartedPulling="2026-02-18 19:44:07.794107525 +0000 UTC m=+1550.244520311" lastFinishedPulling="2026-02-18 19:44:10.859212778 +0000 UTC m=+1553.309625574" observedRunningTime="2026-02-18 19:44:11.871622256 +0000 UTC m=+1554.322035052" watchObservedRunningTime="2026-02-18 19:44:11.876759777 +0000 UTC m=+1554.327172583" Feb 18 19:44:16 crc kubenswrapper[4754]: I0218 19:44:16.706883 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ww6c6" Feb 18 19:44:16 crc kubenswrapper[4754]: I0218 19:44:16.708383 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ww6c6" Feb 18 19:44:16 crc kubenswrapper[4754]: I0218 19:44:16.768236 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ww6c6" Feb 18 19:44:17 crc kubenswrapper[4754]: I0218 19:44:17.352493 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ww6c6" Feb 18 19:44:17 crc 
kubenswrapper[4754]: I0218 19:44:17.409128 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ww6c6"] Feb 18 19:44:19 crc kubenswrapper[4754]: I0218 19:44:19.305348 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ww6c6" podUID="bc7642d2-e76d-4ed9-9453-a950916d1e7a" containerName="registry-server" containerID="cri-o://62f102d28887835bf840bf5d85df902fe5c9041c60505db36f131622b1bf3019" gracePeriod=2 Feb 18 19:44:19 crc kubenswrapper[4754]: I0218 19:44:19.829446 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ww6c6" Feb 18 19:44:19 crc kubenswrapper[4754]: I0218 19:44:19.966297 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7642d2-e76d-4ed9-9453-a950916d1e7a-utilities\") pod \"bc7642d2-e76d-4ed9-9453-a950916d1e7a\" (UID: \"bc7642d2-e76d-4ed9-9453-a950916d1e7a\") " Feb 18 19:44:19 crc kubenswrapper[4754]: I0218 19:44:19.966459 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hg7j\" (UniqueName: \"kubernetes.io/projected/bc7642d2-e76d-4ed9-9453-a950916d1e7a-kube-api-access-8hg7j\") pod \"bc7642d2-e76d-4ed9-9453-a950916d1e7a\" (UID: \"bc7642d2-e76d-4ed9-9453-a950916d1e7a\") " Feb 18 19:44:19 crc kubenswrapper[4754]: I0218 19:44:19.966643 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7642d2-e76d-4ed9-9453-a950916d1e7a-catalog-content\") pod \"bc7642d2-e76d-4ed9-9453-a950916d1e7a\" (UID: \"bc7642d2-e76d-4ed9-9453-a950916d1e7a\") " Feb 18 19:44:19 crc kubenswrapper[4754]: I0218 19:44:19.967513 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/bc7642d2-e76d-4ed9-9453-a950916d1e7a-utilities" (OuterVolumeSpecName: "utilities") pod "bc7642d2-e76d-4ed9-9453-a950916d1e7a" (UID: "bc7642d2-e76d-4ed9-9453-a950916d1e7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:44:19 crc kubenswrapper[4754]: I0218 19:44:19.980622 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc7642d2-e76d-4ed9-9453-a950916d1e7a-kube-api-access-8hg7j" (OuterVolumeSpecName: "kube-api-access-8hg7j") pod "bc7642d2-e76d-4ed9-9453-a950916d1e7a" (UID: "bc7642d2-e76d-4ed9-9453-a950916d1e7a"). InnerVolumeSpecName "kube-api-access-8hg7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:44:20 crc kubenswrapper[4754]: I0218 19:44:20.006062 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc7642d2-e76d-4ed9-9453-a950916d1e7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc7642d2-e76d-4ed9-9453-a950916d1e7a" (UID: "bc7642d2-e76d-4ed9-9453-a950916d1e7a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:44:20 crc kubenswrapper[4754]: I0218 19:44:20.069619 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc7642d2-e76d-4ed9-9453-a950916d1e7a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:44:20 crc kubenswrapper[4754]: I0218 19:44:20.069670 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc7642d2-e76d-4ed9-9453-a950916d1e7a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:44:20 crc kubenswrapper[4754]: I0218 19:44:20.069684 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hg7j\" (UniqueName: \"kubernetes.io/projected/bc7642d2-e76d-4ed9-9453-a950916d1e7a-kube-api-access-8hg7j\") on node \"crc\" DevicePath \"\"" Feb 18 19:44:20 crc kubenswrapper[4754]: I0218 19:44:20.320501 4754 generic.go:334] "Generic (PLEG): container finished" podID="bc7642d2-e76d-4ed9-9453-a950916d1e7a" containerID="62f102d28887835bf840bf5d85df902fe5c9041c60505db36f131622b1bf3019" exitCode=0 Feb 18 19:44:20 crc kubenswrapper[4754]: I0218 19:44:20.320549 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ww6c6" event={"ID":"bc7642d2-e76d-4ed9-9453-a950916d1e7a","Type":"ContainerDied","Data":"62f102d28887835bf840bf5d85df902fe5c9041c60505db36f131622b1bf3019"} Feb 18 19:44:20 crc kubenswrapper[4754]: I0218 19:44:20.320580 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ww6c6" event={"ID":"bc7642d2-e76d-4ed9-9453-a950916d1e7a","Type":"ContainerDied","Data":"8a80a9f720ea87ad92ff06627b9ebebcedc0204725d1ce8df0ef838deee9954c"} Feb 18 19:44:20 crc kubenswrapper[4754]: I0218 19:44:20.320584 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ww6c6" Feb 18 19:44:20 crc kubenswrapper[4754]: I0218 19:44:20.320600 4754 scope.go:117] "RemoveContainer" containerID="62f102d28887835bf840bf5d85df902fe5c9041c60505db36f131622b1bf3019" Feb 18 19:44:20 crc kubenswrapper[4754]: I0218 19:44:20.350492 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ww6c6"] Feb 18 19:44:20 crc kubenswrapper[4754]: I0218 19:44:20.357764 4754 scope.go:117] "RemoveContainer" containerID="80d83edeb055ae15f300815be32451e65d5d14a08ef6c41b42586dced4fbed6d" Feb 18 19:44:20 crc kubenswrapper[4754]: I0218 19:44:20.360651 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ww6c6"] Feb 18 19:44:20 crc kubenswrapper[4754]: I0218 19:44:20.383469 4754 scope.go:117] "RemoveContainer" containerID="f722ca9585e69faa6727734ee3ced2ecab7fa99c5bd9dc83e7c0ae0002ee0316" Feb 18 19:44:20 crc kubenswrapper[4754]: I0218 19:44:20.431310 4754 scope.go:117] "RemoveContainer" containerID="62f102d28887835bf840bf5d85df902fe5c9041c60505db36f131622b1bf3019" Feb 18 19:44:20 crc kubenswrapper[4754]: E0218 19:44:20.431987 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62f102d28887835bf840bf5d85df902fe5c9041c60505db36f131622b1bf3019\": container with ID starting with 62f102d28887835bf840bf5d85df902fe5c9041c60505db36f131622b1bf3019 not found: ID does not exist" containerID="62f102d28887835bf840bf5d85df902fe5c9041c60505db36f131622b1bf3019" Feb 18 19:44:20 crc kubenswrapper[4754]: I0218 19:44:20.432043 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62f102d28887835bf840bf5d85df902fe5c9041c60505db36f131622b1bf3019"} err="failed to get container status \"62f102d28887835bf840bf5d85df902fe5c9041c60505db36f131622b1bf3019\": rpc error: code = NotFound desc = could not find container 
\"62f102d28887835bf840bf5d85df902fe5c9041c60505db36f131622b1bf3019\": container with ID starting with 62f102d28887835bf840bf5d85df902fe5c9041c60505db36f131622b1bf3019 not found: ID does not exist" Feb 18 19:44:20 crc kubenswrapper[4754]: I0218 19:44:20.432072 4754 scope.go:117] "RemoveContainer" containerID="80d83edeb055ae15f300815be32451e65d5d14a08ef6c41b42586dced4fbed6d" Feb 18 19:44:20 crc kubenswrapper[4754]: E0218 19:44:20.432500 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80d83edeb055ae15f300815be32451e65d5d14a08ef6c41b42586dced4fbed6d\": container with ID starting with 80d83edeb055ae15f300815be32451e65d5d14a08ef6c41b42586dced4fbed6d not found: ID does not exist" containerID="80d83edeb055ae15f300815be32451e65d5d14a08ef6c41b42586dced4fbed6d" Feb 18 19:44:20 crc kubenswrapper[4754]: I0218 19:44:20.432582 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80d83edeb055ae15f300815be32451e65d5d14a08ef6c41b42586dced4fbed6d"} err="failed to get container status \"80d83edeb055ae15f300815be32451e65d5d14a08ef6c41b42586dced4fbed6d\": rpc error: code = NotFound desc = could not find container \"80d83edeb055ae15f300815be32451e65d5d14a08ef6c41b42586dced4fbed6d\": container with ID starting with 80d83edeb055ae15f300815be32451e65d5d14a08ef6c41b42586dced4fbed6d not found: ID does not exist" Feb 18 19:44:20 crc kubenswrapper[4754]: I0218 19:44:20.432634 4754 scope.go:117] "RemoveContainer" containerID="f722ca9585e69faa6727734ee3ced2ecab7fa99c5bd9dc83e7c0ae0002ee0316" Feb 18 19:44:20 crc kubenswrapper[4754]: E0218 19:44:20.433080 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f722ca9585e69faa6727734ee3ced2ecab7fa99c5bd9dc83e7c0ae0002ee0316\": container with ID starting with f722ca9585e69faa6727734ee3ced2ecab7fa99c5bd9dc83e7c0ae0002ee0316 not found: ID does not exist" 
containerID="f722ca9585e69faa6727734ee3ced2ecab7fa99c5bd9dc83e7c0ae0002ee0316" Feb 18 19:44:20 crc kubenswrapper[4754]: I0218 19:44:20.433111 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f722ca9585e69faa6727734ee3ced2ecab7fa99c5bd9dc83e7c0ae0002ee0316"} err="failed to get container status \"f722ca9585e69faa6727734ee3ced2ecab7fa99c5bd9dc83e7c0ae0002ee0316\": rpc error: code = NotFound desc = could not find container \"f722ca9585e69faa6727734ee3ced2ecab7fa99c5bd9dc83e7c0ae0002ee0316\": container with ID starting with f722ca9585e69faa6727734ee3ced2ecab7fa99c5bd9dc83e7c0ae0002ee0316 not found: ID does not exist" Feb 18 19:44:21 crc kubenswrapper[4754]: I0218 19:44:21.210350 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:44:21 crc kubenswrapper[4754]: E0218 19:44:21.211178 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:44:22 crc kubenswrapper[4754]: I0218 19:44:22.225419 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc7642d2-e76d-4ed9-9453-a950916d1e7a" path="/var/lib/kubelet/pods/bc7642d2-e76d-4ed9-9453-a950916d1e7a/volumes" Feb 18 19:44:34 crc kubenswrapper[4754]: I0218 19:44:34.598575 4754 scope.go:117] "RemoveContainer" containerID="e443f17e3a218266fdd89edbb2dc4b3bb8154614b82f61c282613b1c175747c9" Feb 18 19:44:34 crc kubenswrapper[4754]: I0218 19:44:34.764644 4754 scope.go:117] "RemoveContainer" containerID="b599a85daa6653dda778e99a7566028a8701f531634e4f9143f9bfdf3d1124f8" Feb 18 19:44:34 crc kubenswrapper[4754]: I0218 
19:44:34.795248 4754 scope.go:117] "RemoveContainer" containerID="6373cbb734fe7f180fca59d2f0a4db6d337503aad8a4b3efddb13cd3da0f8e05" Feb 18 19:44:35 crc kubenswrapper[4754]: I0218 19:44:35.211086 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:44:35 crc kubenswrapper[4754]: E0218 19:44:35.211418 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:44:48 crc kubenswrapper[4754]: I0218 19:44:48.225301 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:44:48 crc kubenswrapper[4754]: E0218 19:44:48.226051 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:44:59 crc kubenswrapper[4754]: I0218 19:44:59.209728 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:44:59 crc kubenswrapper[4754]: E0218 19:44:59.210547 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.147900 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj"] Feb 18 19:45:00 crc kubenswrapper[4754]: E0218 19:45:00.149239 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7642d2-e76d-4ed9-9453-a950916d1e7a" containerName="extract-content" Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.149260 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7642d2-e76d-4ed9-9453-a950916d1e7a" containerName="extract-content" Feb 18 19:45:00 crc kubenswrapper[4754]: E0218 19:45:00.149282 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7642d2-e76d-4ed9-9453-a950916d1e7a" containerName="registry-server" Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.149288 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7642d2-e76d-4ed9-9453-a950916d1e7a" containerName="registry-server" Feb 18 19:45:00 crc kubenswrapper[4754]: E0218 19:45:00.149326 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7642d2-e76d-4ed9-9453-a950916d1e7a" containerName="extract-utilities" Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.149333 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7642d2-e76d-4ed9-9453-a950916d1e7a" containerName="extract-utilities" Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.149530 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc7642d2-e76d-4ed9-9453-a950916d1e7a" containerName="registry-server" Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.150437 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj" Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.152492 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.152947 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.171285 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj"] Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.260777 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dbj9\" (UniqueName: \"kubernetes.io/projected/8d3689e2-a4de-4632-b52b-9af35b45af82-kube-api-access-8dbj9\") pod \"collect-profiles-29524065-d7vvj\" (UID: \"8d3689e2-a4de-4632-b52b-9af35b45af82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj" Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.260849 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d3689e2-a4de-4632-b52b-9af35b45af82-secret-volume\") pod \"collect-profiles-29524065-d7vvj\" (UID: \"8d3689e2-a4de-4632-b52b-9af35b45af82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj" Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.261337 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d3689e2-a4de-4632-b52b-9af35b45af82-config-volume\") pod \"collect-profiles-29524065-d7vvj\" (UID: \"8d3689e2-a4de-4632-b52b-9af35b45af82\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj" Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.363650 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dbj9\" (UniqueName: \"kubernetes.io/projected/8d3689e2-a4de-4632-b52b-9af35b45af82-kube-api-access-8dbj9\") pod \"collect-profiles-29524065-d7vvj\" (UID: \"8d3689e2-a4de-4632-b52b-9af35b45af82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj" Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.363728 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d3689e2-a4de-4632-b52b-9af35b45af82-secret-volume\") pod \"collect-profiles-29524065-d7vvj\" (UID: \"8d3689e2-a4de-4632-b52b-9af35b45af82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj" Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.363862 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d3689e2-a4de-4632-b52b-9af35b45af82-config-volume\") pod \"collect-profiles-29524065-d7vvj\" (UID: \"8d3689e2-a4de-4632-b52b-9af35b45af82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj" Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.364988 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d3689e2-a4de-4632-b52b-9af35b45af82-config-volume\") pod \"collect-profiles-29524065-d7vvj\" (UID: \"8d3689e2-a4de-4632-b52b-9af35b45af82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj" Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.376435 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8d3689e2-a4de-4632-b52b-9af35b45af82-secret-volume\") pod \"collect-profiles-29524065-d7vvj\" (UID: \"8d3689e2-a4de-4632-b52b-9af35b45af82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj" Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.381351 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dbj9\" (UniqueName: \"kubernetes.io/projected/8d3689e2-a4de-4632-b52b-9af35b45af82-kube-api-access-8dbj9\") pod \"collect-profiles-29524065-d7vvj\" (UID: \"8d3689e2-a4de-4632-b52b-9af35b45af82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj" Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.487561 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj" Feb 18 19:45:00 crc kubenswrapper[4754]: I0218 19:45:00.956663 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj"] Feb 18 19:45:01 crc kubenswrapper[4754]: I0218 19:45:01.768435 4754 generic.go:334] "Generic (PLEG): container finished" podID="8d3689e2-a4de-4632-b52b-9af35b45af82" containerID="d6d953c9cb519e8055b7ade7cc78f173b764da950e740b6dd9defa28f1d937d3" exitCode=0 Feb 18 19:45:01 crc kubenswrapper[4754]: I0218 19:45:01.768493 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj" event={"ID":"8d3689e2-a4de-4632-b52b-9af35b45af82","Type":"ContainerDied","Data":"d6d953c9cb519e8055b7ade7cc78f173b764da950e740b6dd9defa28f1d937d3"} Feb 18 19:45:01 crc kubenswrapper[4754]: I0218 19:45:01.768680 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj" 
event={"ID":"8d3689e2-a4de-4632-b52b-9af35b45af82","Type":"ContainerStarted","Data":"2c6c7fb3bca67728b4c9e67117fc4ac54953a53943250bfd15c68bca471368a9"} Feb 18 19:45:03 crc kubenswrapper[4754]: I0218 19:45:03.097606 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj" Feb 18 19:45:03 crc kubenswrapper[4754]: I0218 19:45:03.223774 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d3689e2-a4de-4632-b52b-9af35b45af82-secret-volume\") pod \"8d3689e2-a4de-4632-b52b-9af35b45af82\" (UID: \"8d3689e2-a4de-4632-b52b-9af35b45af82\") " Feb 18 19:45:03 crc kubenswrapper[4754]: I0218 19:45:03.223953 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dbj9\" (UniqueName: \"kubernetes.io/projected/8d3689e2-a4de-4632-b52b-9af35b45af82-kube-api-access-8dbj9\") pod \"8d3689e2-a4de-4632-b52b-9af35b45af82\" (UID: \"8d3689e2-a4de-4632-b52b-9af35b45af82\") " Feb 18 19:45:03 crc kubenswrapper[4754]: I0218 19:45:03.224005 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d3689e2-a4de-4632-b52b-9af35b45af82-config-volume\") pod \"8d3689e2-a4de-4632-b52b-9af35b45af82\" (UID: \"8d3689e2-a4de-4632-b52b-9af35b45af82\") " Feb 18 19:45:03 crc kubenswrapper[4754]: I0218 19:45:03.224853 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d3689e2-a4de-4632-b52b-9af35b45af82-config-volume" (OuterVolumeSpecName: "config-volume") pod "8d3689e2-a4de-4632-b52b-9af35b45af82" (UID: "8d3689e2-a4de-4632-b52b-9af35b45af82"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[4754]: I0218 19:45:03.230212 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d3689e2-a4de-4632-b52b-9af35b45af82-kube-api-access-8dbj9" (OuterVolumeSpecName: "kube-api-access-8dbj9") pod "8d3689e2-a4de-4632-b52b-9af35b45af82" (UID: "8d3689e2-a4de-4632-b52b-9af35b45af82"). InnerVolumeSpecName "kube-api-access-8dbj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[4754]: I0218 19:45:03.235506 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d3689e2-a4de-4632-b52b-9af35b45af82-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8d3689e2-a4de-4632-b52b-9af35b45af82" (UID: "8d3689e2-a4de-4632-b52b-9af35b45af82"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[4754]: I0218 19:45:03.326494 4754 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d3689e2-a4de-4632-b52b-9af35b45af82-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[4754]: I0218 19:45:03.326533 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dbj9\" (UniqueName: \"kubernetes.io/projected/8d3689e2-a4de-4632-b52b-9af35b45af82-kube-api-access-8dbj9\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[4754]: I0218 19:45:03.326548 4754 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d3689e2-a4de-4632-b52b-9af35b45af82-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[4754]: I0218 19:45:03.794078 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj" 
event={"ID":"8d3689e2-a4de-4632-b52b-9af35b45af82","Type":"ContainerDied","Data":"2c6c7fb3bca67728b4c9e67117fc4ac54953a53943250bfd15c68bca471368a9"} Feb 18 19:45:03 crc kubenswrapper[4754]: I0218 19:45:03.794161 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c6c7fb3bca67728b4c9e67117fc4ac54953a53943250bfd15c68bca471368a9" Feb 18 19:45:03 crc kubenswrapper[4754]: I0218 19:45:03.794167 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj" Feb 18 19:45:14 crc kubenswrapper[4754]: I0218 19:45:14.211456 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:45:14 crc kubenswrapper[4754]: E0218 19:45:14.212510 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:45:25 crc kubenswrapper[4754]: I0218 19:45:25.210263 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:45:25 crc kubenswrapper[4754]: E0218 19:45:25.211250 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:45:38 crc kubenswrapper[4754]: I0218 19:45:38.217464 4754 
scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:45:38 crc kubenswrapper[4754]: E0218 19:45:38.218708 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:45:50 crc kubenswrapper[4754]: I0218 19:45:50.209659 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:45:50 crc kubenswrapper[4754]: E0218 19:45:50.210498 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:46:01 crc kubenswrapper[4754]: I0218 19:46:01.210866 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:46:01 crc kubenswrapper[4754]: E0218 19:46:01.212005 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:46:16 crc kubenswrapper[4754]: I0218 
19:46:16.210614 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:46:16 crc kubenswrapper[4754]: E0218 19:46:16.211539 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:46:27 crc kubenswrapper[4754]: I0218 19:46:27.209886 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:46:27 crc kubenswrapper[4754]: E0218 19:46:27.211097 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:46:42 crc kubenswrapper[4754]: I0218 19:46:42.211097 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:46:42 crc kubenswrapper[4754]: E0218 19:46:42.211907 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:46:53 crc 
kubenswrapper[4754]: I0218 19:46:53.914366 4754 generic.go:334] "Generic (PLEG): container finished" podID="03a8bff3-a589-4b74-aa9a-2dcf9a824eb5" containerID="5e706312ccbe425b22b5b58f26be68c349e76f1790f573d9dd12a6ca93349993" exitCode=0 Feb 18 19:46:53 crc kubenswrapper[4754]: I0218 19:46:53.915281 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" event={"ID":"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5","Type":"ContainerDied","Data":"5e706312ccbe425b22b5b58f26be68c349e76f1790f573d9dd12a6ca93349993"} Feb 18 19:46:55 crc kubenswrapper[4754]: I0218 19:46:55.209893 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:46:55 crc kubenswrapper[4754]: E0218 19:46:55.210413 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:46:55 crc kubenswrapper[4754]: I0218 19:46:55.375625 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" Feb 18 19:46:55 crc kubenswrapper[4754]: I0218 19:46:55.434561 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-bootstrap-combined-ca-bundle\") pod \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\" (UID: \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\") " Feb 18 19:46:55 crc kubenswrapper[4754]: I0218 19:46:55.434628 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-ssh-key-openstack-edpm-ipam\") pod \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\" (UID: \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\") " Feb 18 19:46:55 crc kubenswrapper[4754]: I0218 19:46:55.434802 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4zs9\" (UniqueName: \"kubernetes.io/projected/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-kube-api-access-h4zs9\") pod \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\" (UID: \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\") " Feb 18 19:46:55 crc kubenswrapper[4754]: I0218 19:46:55.434845 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-inventory\") pod \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\" (UID: \"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5\") " Feb 18 19:46:55 crc kubenswrapper[4754]: I0218 19:46:55.441210 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-kube-api-access-h4zs9" (OuterVolumeSpecName: "kube-api-access-h4zs9") pod "03a8bff3-a589-4b74-aa9a-2dcf9a824eb5" (UID: "03a8bff3-a589-4b74-aa9a-2dcf9a824eb5"). InnerVolumeSpecName "kube-api-access-h4zs9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:46:55 crc kubenswrapper[4754]: I0218 19:46:55.441435 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "03a8bff3-a589-4b74-aa9a-2dcf9a824eb5" (UID: "03a8bff3-a589-4b74-aa9a-2dcf9a824eb5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:46:55 crc kubenswrapper[4754]: I0218 19:46:55.467317 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "03a8bff3-a589-4b74-aa9a-2dcf9a824eb5" (UID: "03a8bff3-a589-4b74-aa9a-2dcf9a824eb5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:46:55 crc kubenswrapper[4754]: I0218 19:46:55.469094 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-inventory" (OuterVolumeSpecName: "inventory") pod "03a8bff3-a589-4b74-aa9a-2dcf9a824eb5" (UID: "03a8bff3-a589-4b74-aa9a-2dcf9a824eb5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:46:55 crc kubenswrapper[4754]: I0218 19:46:55.539719 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4zs9\" (UniqueName: \"kubernetes.io/projected/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-kube-api-access-h4zs9\") on node \"crc\" DevicePath \"\"" Feb 18 19:46:55 crc kubenswrapper[4754]: I0218 19:46:55.539798 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:46:55 crc kubenswrapper[4754]: I0218 19:46:55.539815 4754 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:46:55 crc kubenswrapper[4754]: I0218 19:46:55.539832 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03a8bff3-a589-4b74-aa9a-2dcf9a824eb5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:46:55 crc kubenswrapper[4754]: I0218 19:46:55.946018 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" event={"ID":"03a8bff3-a589-4b74-aa9a-2dcf9a824eb5","Type":"ContainerDied","Data":"cff76349eba9fb71cb1abacf68efd6a091f952d14aea1284e55f76e2a2d32af7"} Feb 18 19:46:55 crc kubenswrapper[4754]: I0218 19:46:55.946093 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cff76349eba9fb71cb1abacf68efd6a091f952d14aea1284e55f76e2a2d32af7" Feb 18 19:46:55 crc kubenswrapper[4754]: I0218 19:46:55.946098 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ttwdx" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.034207 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99"] Feb 18 19:46:56 crc kubenswrapper[4754]: E0218 19:46:56.034711 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d3689e2-a4de-4632-b52b-9af35b45af82" containerName="collect-profiles" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.034739 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d3689e2-a4de-4632-b52b-9af35b45af82" containerName="collect-profiles" Feb 18 19:46:56 crc kubenswrapper[4754]: E0218 19:46:56.034774 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a8bff3-a589-4b74-aa9a-2dcf9a824eb5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.034782 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a8bff3-a589-4b74-aa9a-2dcf9a824eb5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.035022 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d3689e2-a4de-4632-b52b-9af35b45af82" containerName="collect-profiles" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.035057 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a8bff3-a589-4b74-aa9a-2dcf9a824eb5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.035946 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.041549 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.041608 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.041720 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bt6gd" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.042034 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.047631 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99"] Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.151762 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56759c04-95ec-4e13-8a41-1d1a32258d4c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lxz99\" (UID: \"56759c04-95ec-4e13-8a41-1d1a32258d4c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.151838 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56759c04-95ec-4e13-8a41-1d1a32258d4c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lxz99\" (UID: \"56759c04-95ec-4e13-8a41-1d1a32258d4c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 
19:46:56.151950 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rbbz\" (UniqueName: \"kubernetes.io/projected/56759c04-95ec-4e13-8a41-1d1a32258d4c-kube-api-access-6rbbz\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lxz99\" (UID: \"56759c04-95ec-4e13-8a41-1d1a32258d4c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.253742 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56759c04-95ec-4e13-8a41-1d1a32258d4c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lxz99\" (UID: \"56759c04-95ec-4e13-8a41-1d1a32258d4c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.253812 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56759c04-95ec-4e13-8a41-1d1a32258d4c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lxz99\" (UID: \"56759c04-95ec-4e13-8a41-1d1a32258d4c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.253932 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rbbz\" (UniqueName: \"kubernetes.io/projected/56759c04-95ec-4e13-8a41-1d1a32258d4c-kube-api-access-6rbbz\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lxz99\" (UID: \"56759c04-95ec-4e13-8a41-1d1a32258d4c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.258748 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/56759c04-95ec-4e13-8a41-1d1a32258d4c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lxz99\" (UID: \"56759c04-95ec-4e13-8a41-1d1a32258d4c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.258801 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56759c04-95ec-4e13-8a41-1d1a32258d4c-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lxz99\" (UID: \"56759c04-95ec-4e13-8a41-1d1a32258d4c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.287948 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rbbz\" (UniqueName: \"kubernetes.io/projected/56759c04-95ec-4e13-8a41-1d1a32258d4c-kube-api-access-6rbbz\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-lxz99\" (UID: \"56759c04-95ec-4e13-8a41-1d1a32258d4c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.359018 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99" Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.886834 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99"] Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.888401 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:46:56 crc kubenswrapper[4754]: I0218 19:46:56.958345 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99" event={"ID":"56759c04-95ec-4e13-8a41-1d1a32258d4c","Type":"ContainerStarted","Data":"7c7ef3190a79ccb4b4f0ec2ce99c309702cfc0ff9aba5087a04a34f361e55c2f"} Feb 18 19:46:57 crc kubenswrapper[4754]: I0218 19:46:57.971982 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99" event={"ID":"56759c04-95ec-4e13-8a41-1d1a32258d4c","Type":"ContainerStarted","Data":"78a84962d726e9a755c0f4f5e26491dc37bbeedf1f706a6ee295fad8e5407f14"} Feb 18 19:46:58 crc kubenswrapper[4754]: I0218 19:46:58.014386 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99" podStartSLOduration=1.536829032 podStartE2EDuration="2.014355883s" podCreationTimestamp="2026-02-18 19:46:56 +0000 UTC" firstStartedPulling="2026-02-18 19:46:56.888076526 +0000 UTC m=+1719.338489322" lastFinishedPulling="2026-02-18 19:46:57.365603327 +0000 UTC m=+1719.816016173" observedRunningTime="2026-02-18 19:46:57.996621153 +0000 UTC m=+1720.447033959" watchObservedRunningTime="2026-02-18 19:46:58.014355883 +0000 UTC m=+1720.464768709" Feb 18 19:47:02 crc kubenswrapper[4754]: I0218 19:47:02.046892 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-snqpk"] Feb 18 19:47:02 crc kubenswrapper[4754]: I0218 
19:47:02.058834 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-snqpk"] Feb 18 19:47:02 crc kubenswrapper[4754]: I0218 19:47:02.222266 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4" path="/var/lib/kubelet/pods/dcdb2f7d-6cc1-4e32-a98e-9f1c1ce03ef4/volumes" Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.064020 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9bd7-account-create-update-hzjql"] Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.076216 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-dqt2f"] Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.090654 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9bc3-account-create-update-hrjdd"] Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.121344 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-8883-account-create-update-zdkht"] Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.129639 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-2262-account-create-update-45hjg"] Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.137842 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-nkms6"] Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.146437 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-9bd7-account-create-update-hzjql"] Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.154479 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-nkms6"] Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.162759 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-2262-account-create-update-45hjg"] Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.172284 4754 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/keystone-8883-account-create-update-zdkht"] Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.182580 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-dqt2f"] Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.196269 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-7l2r8"] Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.203811 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9bc3-account-create-update-hrjdd"] Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.223423 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="042ec2fb-f4b7-4310-a6eb-6f71c8e440c7" path="/var/lib/kubelet/pods/042ec2fb-f4b7-4310-a6eb-6f71c8e440c7/volumes" Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.223959 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37bb31ad-8178-4924-ac57-6b8325e3cafa" path="/var/lib/kubelet/pods/37bb31ad-8178-4924-ac57-6b8325e3cafa/volumes" Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.224616 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c661122-825e-4bfd-bdbb-b89b44361abb" path="/var/lib/kubelet/pods/4c661122-825e-4bfd-bdbb-b89b44361abb/volumes" Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.225129 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5616676f-6b15-4f24-aa3f-5d88ad180239" path="/var/lib/kubelet/pods/5616676f-6b15-4f24-aa3f-5d88ad180239/volumes" Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.226126 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd0e6457-f44a-4059-a466-328fde68deaa" path="/var/lib/kubelet/pods/dd0e6457-f44a-4059-a466-328fde68deaa/volumes" Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.226663 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="dedaac0d-ebad-497d-9b8c-b6ee470782f2" path="/var/lib/kubelet/pods/dedaac0d-ebad-497d-9b8c-b6ee470782f2/volumes" Feb 18 19:47:04 crc kubenswrapper[4754]: I0218 19:47:04.227228 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-7l2r8"] Feb 18 19:47:06 crc kubenswrapper[4754]: I0218 19:47:06.223656 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abef9d48-efe9-4491-96cd-a1cd94fecfe1" path="/var/lib/kubelet/pods/abef9d48-efe9-4491-96cd-a1cd94fecfe1/volumes" Feb 18 19:47:07 crc kubenswrapper[4754]: I0218 19:47:07.212250 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:47:07 crc kubenswrapper[4754]: E0218 19:47:07.212820 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:47:15 crc kubenswrapper[4754]: I0218 19:47:15.033452 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4zlg5"] Feb 18 19:47:15 crc kubenswrapper[4754]: I0218 19:47:15.043924 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4zlg5"] Feb 18 19:47:16 crc kubenswrapper[4754]: I0218 19:47:16.226352 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0899bd7b-2289-4496-8521-c8dbea7874f7" path="/var/lib/kubelet/pods/0899bd7b-2289-4496-8521-c8dbea7874f7/volumes" Feb 18 19:47:18 crc kubenswrapper[4754]: I0218 19:47:18.218695 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:47:18 crc 
kubenswrapper[4754]: E0218 19:47:18.219271 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:47:30 crc kubenswrapper[4754]: I0218 19:47:30.057342 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-a469-account-create-update-trbvp"] Feb 18 19:47:30 crc kubenswrapper[4754]: I0218 19:47:30.070867 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-a469-account-create-update-trbvp"] Feb 18 19:47:30 crc kubenswrapper[4754]: I0218 19:47:30.251548 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa838e2c-4d5f-4820-ae94-27d460ee1664" path="/var/lib/kubelet/pods/fa838e2c-4d5f-4820-ae94-27d460ee1664/volumes" Feb 18 19:47:32 crc kubenswrapper[4754]: I0218 19:47:32.047082 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-c557-account-create-update-qxlgj"] Feb 18 19:47:32 crc kubenswrapper[4754]: I0218 19:47:32.058430 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-a073-account-create-update-wmjfz"] Feb 18 19:47:32 crc kubenswrapper[4754]: I0218 19:47:32.068752 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-z7kq5"] Feb 18 19:47:32 crc kubenswrapper[4754]: I0218 19:47:32.079354 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-kw8rh"] Feb 18 19:47:32 crc kubenswrapper[4754]: I0218 19:47:32.088441 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-lxzvw"] Feb 18 19:47:32 crc kubenswrapper[4754]: I0218 19:47:32.096644 4754 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/neutron-a073-account-create-update-wmjfz"] Feb 18 19:47:32 crc kubenswrapper[4754]: I0218 19:47:32.104700 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-c557-account-create-update-qxlgj"] Feb 18 19:47:32 crc kubenswrapper[4754]: I0218 19:47:32.114377 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-lxzvw"] Feb 18 19:47:32 crc kubenswrapper[4754]: I0218 19:47:32.122607 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-kw8rh"] Feb 18 19:47:32 crc kubenswrapper[4754]: I0218 19:47:32.129914 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-z7kq5"] Feb 18 19:47:32 crc kubenswrapper[4754]: I0218 19:47:32.224185 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1" path="/var/lib/kubelet/pods/1d6d4a9b-75c2-4d59-b3ff-f93adc3c19e1/volumes" Feb 18 19:47:32 crc kubenswrapper[4754]: I0218 19:47:32.225426 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4de98594-db65-4550-bd48-dddb366bc4de" path="/var/lib/kubelet/pods/4de98594-db65-4550-bd48-dddb366bc4de/volumes" Feb 18 19:47:32 crc kubenswrapper[4754]: I0218 19:47:32.226581 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3ce92b2-1e49-4847-a38e-7322e4089b05" path="/var/lib/kubelet/pods/d3ce92b2-1e49-4847-a38e-7322e4089b05/volumes" Feb 18 19:47:32 crc kubenswrapper[4754]: I0218 19:47:32.227943 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e62a77df-9678-4173-bd87-a3451220eb34" path="/var/lib/kubelet/pods/e62a77df-9678-4173-bd87-a3451220eb34/volumes" Feb 18 19:47:32 crc kubenswrapper[4754]: I0218 19:47:32.230366 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe7d1a77-a63f-475f-9ffb-8ce51ab1689d" path="/var/lib/kubelet/pods/fe7d1a77-a63f-475f-9ffb-8ce51ab1689d/volumes" Feb 18 
19:47:33 crc kubenswrapper[4754]: I0218 19:47:33.209508 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:47:33 crc kubenswrapper[4754]: E0218 19:47:33.209812 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:47:34 crc kubenswrapper[4754]: I0218 19:47:34.934640 4754 scope.go:117] "RemoveContainer" containerID="d980310407ce084ecb2e43b4d50a473957d8610e0cc11649fb05f847ddce83c7" Feb 18 19:47:34 crc kubenswrapper[4754]: I0218 19:47:34.969427 4754 scope.go:117] "RemoveContainer" containerID="3db40c649da737a2917b3da1b1fc3ffc0740b3e3b333ce052a8bb87c3775a01c" Feb 18 19:47:35 crc kubenswrapper[4754]: I0218 19:47:35.033576 4754 scope.go:117] "RemoveContainer" containerID="8362313fdf35e4cd5d50bc3e5125fb41f7f6f3d6ecfac1e64f9416ec40c08f61" Feb 18 19:47:35 crc kubenswrapper[4754]: I0218 19:47:35.083110 4754 scope.go:117] "RemoveContainer" containerID="d5482053b9ffc15608039f82b5c85e977bcab1b84011872ddb54ebfdda6a02af" Feb 18 19:47:35 crc kubenswrapper[4754]: I0218 19:47:35.126516 4754 scope.go:117] "RemoveContainer" containerID="f0ae20e56636f2f58e170665eb94703c89ec3b120c6e990c3a35db22f4d431ce" Feb 18 19:47:35 crc kubenswrapper[4754]: I0218 19:47:35.175373 4754 scope.go:117] "RemoveContainer" containerID="79052e4ab08825dd4c8f4c4b65e9e53cca6a8cd1a355877764ee9c4a80d26a87" Feb 18 19:47:35 crc kubenswrapper[4754]: I0218 19:47:35.226248 4754 scope.go:117] "RemoveContainer" containerID="27a37f9793bd652c507bc5cce74d2f502dadde1a98e33cf67faedd64c96441d8" Feb 18 19:47:35 crc kubenswrapper[4754]: I0218 19:47:35.263685 4754 scope.go:117] 
"RemoveContainer" containerID="4a2908a06b8204acf4446d82dad166dcabdc1f7797d7dc75136db58ef2e16ccc" Feb 18 19:47:35 crc kubenswrapper[4754]: I0218 19:47:35.313414 4754 scope.go:117] "RemoveContainer" containerID="863fb4218b0293b802ec35dd63a1f84a5251700d09e4c0ab73e51302f9873420" Feb 18 19:47:35 crc kubenswrapper[4754]: I0218 19:47:35.333607 4754 scope.go:117] "RemoveContainer" containerID="d0e224b8c48c819419055bd2162941d41d9355b9d49e6d30e1b2c161a7b2b00c" Feb 18 19:47:35 crc kubenswrapper[4754]: I0218 19:47:35.352334 4754 scope.go:117] "RemoveContainer" containerID="11b472fe52c282b49235c45ccd0db4e5e7237572c828da8a2f112fbd07e93737" Feb 18 19:47:35 crc kubenswrapper[4754]: I0218 19:47:35.386455 4754 scope.go:117] "RemoveContainer" containerID="86659376ef5c511429490cbc5db87d2db3d4570426e400072487fe2f3a5219ee" Feb 18 19:47:35 crc kubenswrapper[4754]: I0218 19:47:35.413032 4754 scope.go:117] "RemoveContainer" containerID="28f799587f9bb06f73422208d8a51d499fee19b45d1b07447a1ae47de06dbc43" Feb 18 19:47:35 crc kubenswrapper[4754]: I0218 19:47:35.432044 4754 scope.go:117] "RemoveContainer" containerID="7739c4f42187fa9818c14c2865f659911e40ca4500f6382acbb988497d975e86" Feb 18 19:47:35 crc kubenswrapper[4754]: I0218 19:47:35.450849 4754 scope.go:117] "RemoveContainer" containerID="327a1b3c02eb0f9e90847177221fa1219025fe1b9e107dc0085f0b09c2cc96f1" Feb 18 19:47:47 crc kubenswrapper[4754]: I0218 19:47:47.210555 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:47:47 crc kubenswrapper[4754]: E0218 19:47:47.211666 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" 
podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:47:59 crc kubenswrapper[4754]: I0218 19:47:59.056097 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-tkp99"] Feb 18 19:47:59 crc kubenswrapper[4754]: I0218 19:47:59.070864 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-tkp99"] Feb 18 19:48:00 crc kubenswrapper[4754]: I0218 19:48:00.229671 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef3a989f-f82e-4062-8b49-3f4cb7959b73" path="/var/lib/kubelet/pods/ef3a989f-f82e-4062-8b49-3f4cb7959b73/volumes" Feb 18 19:48:02 crc kubenswrapper[4754]: I0218 19:48:02.210248 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:48:02 crc kubenswrapper[4754]: E0218 19:48:02.210698 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:48:04 crc kubenswrapper[4754]: I0218 19:48:04.058044 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-f6s4l"] Feb 18 19:48:04 crc kubenswrapper[4754]: I0218 19:48:04.077665 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-f6s4l"] Feb 18 19:48:04 crc kubenswrapper[4754]: I0218 19:48:04.222501 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0db8affb-2742-46e4-a19d-a907e5c6d28d" path="/var/lib/kubelet/pods/0db8affb-2742-46e4-a19d-a907e5c6d28d/volumes" Feb 18 19:48:13 crc kubenswrapper[4754]: I0218 19:48:13.209631 4754 scope.go:117] "RemoveContainer" 
containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:48:13 crc kubenswrapper[4754]: I0218 19:48:13.804432 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"d9f2023f02567cdf6089106e2f4a1b2d50f661e61a8c391b007983e0df2635db"} Feb 18 19:48:22 crc kubenswrapper[4754]: I0218 19:48:22.911073 4754 generic.go:334] "Generic (PLEG): container finished" podID="56759c04-95ec-4e13-8a41-1d1a32258d4c" containerID="78a84962d726e9a755c0f4f5e26491dc37bbeedf1f706a6ee295fad8e5407f14" exitCode=0 Feb 18 19:48:22 crc kubenswrapper[4754]: I0218 19:48:22.911841 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99" event={"ID":"56759c04-95ec-4e13-8a41-1d1a32258d4c","Type":"ContainerDied","Data":"78a84962d726e9a755c0f4f5e26491dc37bbeedf1f706a6ee295fad8e5407f14"} Feb 18 19:48:24 crc kubenswrapper[4754]: I0218 19:48:24.339812 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99" Feb 18 19:48:24 crc kubenswrapper[4754]: I0218 19:48:24.535619 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56759c04-95ec-4e13-8a41-1d1a32258d4c-ssh-key-openstack-edpm-ipam\") pod \"56759c04-95ec-4e13-8a41-1d1a32258d4c\" (UID: \"56759c04-95ec-4e13-8a41-1d1a32258d4c\") " Feb 18 19:48:24 crc kubenswrapper[4754]: I0218 19:48:24.536900 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rbbz\" (UniqueName: \"kubernetes.io/projected/56759c04-95ec-4e13-8a41-1d1a32258d4c-kube-api-access-6rbbz\") pod \"56759c04-95ec-4e13-8a41-1d1a32258d4c\" (UID: \"56759c04-95ec-4e13-8a41-1d1a32258d4c\") " Feb 18 19:48:24 crc kubenswrapper[4754]: I0218 19:48:24.536934 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56759c04-95ec-4e13-8a41-1d1a32258d4c-inventory\") pod \"56759c04-95ec-4e13-8a41-1d1a32258d4c\" (UID: \"56759c04-95ec-4e13-8a41-1d1a32258d4c\") " Feb 18 19:48:24 crc kubenswrapper[4754]: I0218 19:48:24.541891 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56759c04-95ec-4e13-8a41-1d1a32258d4c-kube-api-access-6rbbz" (OuterVolumeSpecName: "kube-api-access-6rbbz") pod "56759c04-95ec-4e13-8a41-1d1a32258d4c" (UID: "56759c04-95ec-4e13-8a41-1d1a32258d4c"). InnerVolumeSpecName "kube-api-access-6rbbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:48:24 crc kubenswrapper[4754]: I0218 19:48:24.565451 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56759c04-95ec-4e13-8a41-1d1a32258d4c-inventory" (OuterVolumeSpecName: "inventory") pod "56759c04-95ec-4e13-8a41-1d1a32258d4c" (UID: "56759c04-95ec-4e13-8a41-1d1a32258d4c"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:48:24 crc kubenswrapper[4754]: I0218 19:48:24.589661 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56759c04-95ec-4e13-8a41-1d1a32258d4c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "56759c04-95ec-4e13-8a41-1d1a32258d4c" (UID: "56759c04-95ec-4e13-8a41-1d1a32258d4c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:48:24 crc kubenswrapper[4754]: I0218 19:48:24.639666 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56759c04-95ec-4e13-8a41-1d1a32258d4c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:24 crc kubenswrapper[4754]: I0218 19:48:24.639696 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rbbz\" (UniqueName: \"kubernetes.io/projected/56759c04-95ec-4e13-8a41-1d1a32258d4c-kube-api-access-6rbbz\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:24 crc kubenswrapper[4754]: I0218 19:48:24.639706 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56759c04-95ec-4e13-8a41-1d1a32258d4c-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:24 crc kubenswrapper[4754]: I0218 19:48:24.934610 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99" event={"ID":"56759c04-95ec-4e13-8a41-1d1a32258d4c","Type":"ContainerDied","Data":"7c7ef3190a79ccb4b4f0ec2ce99c309702cfc0ff9aba5087a04a34f361e55c2f"} Feb 18 19:48:24 crc kubenswrapper[4754]: I0218 19:48:24.934665 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c7ef3190a79ccb4b4f0ec2ce99c309702cfc0ff9aba5087a04a34f361e55c2f" Feb 18 19:48:24 crc kubenswrapper[4754]: I0218 
19:48:24.934674 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-lxz99" Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.034007 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4"] Feb 18 19:48:25 crc kubenswrapper[4754]: E0218 19:48:25.034564 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56759c04-95ec-4e13-8a41-1d1a32258d4c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.034588 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="56759c04-95ec-4e13-8a41-1d1a32258d4c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.034840 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="56759c04-95ec-4e13-8a41-1d1a32258d4c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.035673 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4" Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.045759 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.046022 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.046229 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bt6gd" Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.046844 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl8pq\" (UniqueName: \"kubernetes.io/projected/05421647-7011-46b0-8597-49ba8458a4dd-kube-api-access-gl8pq\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4\" (UID: \"05421647-7011-46b0-8597-49ba8458a4dd\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4" Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.047038 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05421647-7011-46b0-8597-49ba8458a4dd-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4\" (UID: \"05421647-7011-46b0-8597-49ba8458a4dd\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4" Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.047166 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05421647-7011-46b0-8597-49ba8458a4dd-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4\" (UID: \"05421647-7011-46b0-8597-49ba8458a4dd\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4" Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.047292 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.047541 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4"] Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.148326 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl8pq\" (UniqueName: \"kubernetes.io/projected/05421647-7011-46b0-8597-49ba8458a4dd-kube-api-access-gl8pq\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4\" (UID: \"05421647-7011-46b0-8597-49ba8458a4dd\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4" Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.148419 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05421647-7011-46b0-8597-49ba8458a4dd-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4\" (UID: \"05421647-7011-46b0-8597-49ba8458a4dd\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4" Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.148463 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05421647-7011-46b0-8597-49ba8458a4dd-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4\" (UID: \"05421647-7011-46b0-8597-49ba8458a4dd\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4" Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.153015 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/05421647-7011-46b0-8597-49ba8458a4dd-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4\" (UID: \"05421647-7011-46b0-8597-49ba8458a4dd\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4" Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.156521 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05421647-7011-46b0-8597-49ba8458a4dd-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4\" (UID: \"05421647-7011-46b0-8597-49ba8458a4dd\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4" Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.168389 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl8pq\" (UniqueName: \"kubernetes.io/projected/05421647-7011-46b0-8597-49ba8458a4dd-kube-api-access-gl8pq\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4\" (UID: \"05421647-7011-46b0-8597-49ba8458a4dd\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4" Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.360863 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4" Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.687035 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4"] Feb 18 19:48:25 crc kubenswrapper[4754]: I0218 19:48:25.945047 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4" event={"ID":"05421647-7011-46b0-8597-49ba8458a4dd","Type":"ContainerStarted","Data":"848267ba0bbf825ada705d83f85de445727717ded86770205aa24bb7bfcd2157"} Feb 18 19:48:26 crc kubenswrapper[4754]: I0218 19:48:26.960507 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4" event={"ID":"05421647-7011-46b0-8597-49ba8458a4dd","Type":"ContainerStarted","Data":"f9b3e38dacf378f7b9b8f0c9b6f90d60176124c9df5636fb13c8fbe268a1a475"} Feb 18 19:48:26 crc kubenswrapper[4754]: I0218 19:48:26.986700 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4" podStartSLOduration=1.558606726 podStartE2EDuration="1.986678583s" podCreationTimestamp="2026-02-18 19:48:25 +0000 UTC" firstStartedPulling="2026-02-18 19:48:25.697791558 +0000 UTC m=+1808.148204344" lastFinishedPulling="2026-02-18 19:48:26.125863405 +0000 UTC m=+1808.576276201" observedRunningTime="2026-02-18 19:48:26.981600055 +0000 UTC m=+1809.432012851" watchObservedRunningTime="2026-02-18 19:48:26.986678583 +0000 UTC m=+1809.437091379" Feb 18 19:48:30 crc kubenswrapper[4754]: I0218 19:48:30.044016 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-wl8ql"] Feb 18 19:48:30 crc kubenswrapper[4754]: I0218 19:48:30.055308 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-wl8ql"] Feb 18 19:48:30 crc kubenswrapper[4754]: 
I0218 19:48:30.224877 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d046b6fd-1000-4f80-af20-d756adbab2ea" path="/var/lib/kubelet/pods/d046b6fd-1000-4f80-af20-d756adbab2ea/volumes" Feb 18 19:48:31 crc kubenswrapper[4754]: I0218 19:48:31.032026 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-kkj6c"] Feb 18 19:48:31 crc kubenswrapper[4754]: I0218 19:48:31.040671 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-kkj6c"] Feb 18 19:48:32 crc kubenswrapper[4754]: I0218 19:48:32.220003 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1abaf62-0594-4378-bea6-b5dc29d52241" path="/var/lib/kubelet/pods/b1abaf62-0594-4378-bea6-b5dc29d52241/volumes" Feb 18 19:48:35 crc kubenswrapper[4754]: I0218 19:48:35.700646 4754 scope.go:117] "RemoveContainer" containerID="6f854c5a231e9496f68db3f3102285b6259db1e80c055dccb160908751e3a010" Feb 18 19:48:35 crc kubenswrapper[4754]: I0218 19:48:35.746454 4754 scope.go:117] "RemoveContainer" containerID="bad3e277c32e6cce4459c94964e348ee6c0a455e24f875b0f6d39ac216f4b610" Feb 18 19:48:35 crc kubenswrapper[4754]: I0218 19:48:35.811603 4754 scope.go:117] "RemoveContainer" containerID="77b3ec840318ae3f7f6da44e6af936dbf6c3c77bf92761bfba1f39e794f8a3a2" Feb 18 19:48:35 crc kubenswrapper[4754]: I0218 19:48:35.850837 4754 scope.go:117] "RemoveContainer" containerID="7b975af1f6f66177a58d1deec670fbf1439aadfbcb2e00637d32275d7dd3dd0b" Feb 18 19:48:49 crc kubenswrapper[4754]: I0218 19:48:49.040463 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-5d7g9"] Feb 18 19:48:49 crc kubenswrapper[4754]: I0218 19:48:49.048589 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-kd677"] Feb 18 19:48:49 crc kubenswrapper[4754]: I0218 19:48:49.056555 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-5d7g9"] Feb 18 19:48:49 crc 
kubenswrapper[4754]: I0218 19:48:49.063760 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-kd677"] Feb 18 19:48:50 crc kubenswrapper[4754]: I0218 19:48:50.036035 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-b2mr5"] Feb 18 19:48:50 crc kubenswrapper[4754]: I0218 19:48:50.047745 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-b2mr5"] Feb 18 19:48:50 crc kubenswrapper[4754]: I0218 19:48:50.228549 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5747d187-87f8-4baa-b0aa-65916db69601" path="/var/lib/kubelet/pods/5747d187-87f8-4baa-b0aa-65916db69601/volumes" Feb 18 19:48:50 crc kubenswrapper[4754]: I0218 19:48:50.229767 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a109a6c-ffaa-479e-95e6-ef033aec4b27" path="/var/lib/kubelet/pods/9a109a6c-ffaa-479e-95e6-ef033aec4b27/volumes" Feb 18 19:48:50 crc kubenswrapper[4754]: I0218 19:48:50.230709 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbd89f97-ca15-432f-9ba7-6ce957c1bfa8" path="/var/lib/kubelet/pods/bbd89f97-ca15-432f-9ba7-6ce957c1bfa8/volumes" Feb 18 19:49:04 crc kubenswrapper[4754]: I0218 19:49:04.044368 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-79xk6"] Feb 18 19:49:04 crc kubenswrapper[4754]: I0218 19:49:04.056281 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-79xk6"] Feb 18 19:49:04 crc kubenswrapper[4754]: I0218 19:49:04.223582 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc061809-61de-4d52-909b-e2d4957dc4a4" path="/var/lib/kubelet/pods/fc061809-61de-4d52-909b-e2d4957dc4a4/volumes" Feb 18 19:49:29 crc kubenswrapper[4754]: I0218 19:49:29.795757 4754 generic.go:334] "Generic (PLEG): container finished" podID="05421647-7011-46b0-8597-49ba8458a4dd" 
containerID="f9b3e38dacf378f7b9b8f0c9b6f90d60176124c9df5636fb13c8fbe268a1a475" exitCode=0 Feb 18 19:49:29 crc kubenswrapper[4754]: I0218 19:49:29.795841 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4" event={"ID":"05421647-7011-46b0-8597-49ba8458a4dd","Type":"ContainerDied","Data":"f9b3e38dacf378f7b9b8f0c9b6f90d60176124c9df5636fb13c8fbe268a1a475"} Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.219606 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4" Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.359314 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05421647-7011-46b0-8597-49ba8458a4dd-inventory\") pod \"05421647-7011-46b0-8597-49ba8458a4dd\" (UID: \"05421647-7011-46b0-8597-49ba8458a4dd\") " Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.359595 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05421647-7011-46b0-8597-49ba8458a4dd-ssh-key-openstack-edpm-ipam\") pod \"05421647-7011-46b0-8597-49ba8458a4dd\" (UID: \"05421647-7011-46b0-8597-49ba8458a4dd\") " Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.359685 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl8pq\" (UniqueName: \"kubernetes.io/projected/05421647-7011-46b0-8597-49ba8458a4dd-kube-api-access-gl8pq\") pod \"05421647-7011-46b0-8597-49ba8458a4dd\" (UID: \"05421647-7011-46b0-8597-49ba8458a4dd\") " Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.365620 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05421647-7011-46b0-8597-49ba8458a4dd-kube-api-access-gl8pq" (OuterVolumeSpecName: 
"kube-api-access-gl8pq") pod "05421647-7011-46b0-8597-49ba8458a4dd" (UID: "05421647-7011-46b0-8597-49ba8458a4dd"). InnerVolumeSpecName "kube-api-access-gl8pq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.386018 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05421647-7011-46b0-8597-49ba8458a4dd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "05421647-7011-46b0-8597-49ba8458a4dd" (UID: "05421647-7011-46b0-8597-49ba8458a4dd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.392767 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05421647-7011-46b0-8597-49ba8458a4dd-inventory" (OuterVolumeSpecName: "inventory") pod "05421647-7011-46b0-8597-49ba8458a4dd" (UID: "05421647-7011-46b0-8597-49ba8458a4dd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.463178 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05421647-7011-46b0-8597-49ba8458a4dd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.463209 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl8pq\" (UniqueName: \"kubernetes.io/projected/05421647-7011-46b0-8597-49ba8458a4dd-kube-api-access-gl8pq\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.463218 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05421647-7011-46b0-8597-49ba8458a4dd-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.820978 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4" event={"ID":"05421647-7011-46b0-8597-49ba8458a4dd","Type":"ContainerDied","Data":"848267ba0bbf825ada705d83f85de445727717ded86770205aa24bb7bfcd2157"} Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.821285 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="848267ba0bbf825ada705d83f85de445727717ded86770205aa24bb7bfcd2157" Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.821072 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zdzg4" Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.904873 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp"] Feb 18 19:49:31 crc kubenswrapper[4754]: E0218 19:49:31.905339 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05421647-7011-46b0-8597-49ba8458a4dd" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.905362 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="05421647-7011-46b0-8597-49ba8458a4dd" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.905581 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="05421647-7011-46b0-8597-49ba8458a4dd" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.906264 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp" Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.908026 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.908308 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.911615 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.911779 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bt6gd" Feb 18 19:49:31 crc kubenswrapper[4754]: I0218 19:49:31.917259 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp"] Feb 18 19:49:32 crc kubenswrapper[4754]: I0218 19:49:32.074613 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e820f1ed-792c-4664-8523-f89fe3eac90a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-skczp\" (UID: \"e820f1ed-792c-4664-8523-f89fe3eac90a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp" Feb 18 19:49:32 crc kubenswrapper[4754]: I0218 19:49:32.074753 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e820f1ed-792c-4664-8523-f89fe3eac90a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-skczp\" (UID: \"e820f1ed-792c-4664-8523-f89fe3eac90a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp" Feb 18 19:49:32 crc kubenswrapper[4754]: 
I0218 19:49:32.074865 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqg78\" (UniqueName: \"kubernetes.io/projected/e820f1ed-792c-4664-8523-f89fe3eac90a-kube-api-access-pqg78\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-skczp\" (UID: \"e820f1ed-792c-4664-8523-f89fe3eac90a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp" Feb 18 19:49:32 crc kubenswrapper[4754]: I0218 19:49:32.176737 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqg78\" (UniqueName: \"kubernetes.io/projected/e820f1ed-792c-4664-8523-f89fe3eac90a-kube-api-access-pqg78\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-skczp\" (UID: \"e820f1ed-792c-4664-8523-f89fe3eac90a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp" Feb 18 19:49:32 crc kubenswrapper[4754]: I0218 19:49:32.176867 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e820f1ed-792c-4664-8523-f89fe3eac90a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-skczp\" (UID: \"e820f1ed-792c-4664-8523-f89fe3eac90a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp" Feb 18 19:49:32 crc kubenswrapper[4754]: I0218 19:49:32.176962 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e820f1ed-792c-4664-8523-f89fe3eac90a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-skczp\" (UID: \"e820f1ed-792c-4664-8523-f89fe3eac90a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp" Feb 18 19:49:32 crc kubenswrapper[4754]: I0218 19:49:32.181583 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e820f1ed-792c-4664-8523-f89fe3eac90a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-skczp\" (UID: \"e820f1ed-792c-4664-8523-f89fe3eac90a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp" Feb 18 19:49:32 crc kubenswrapper[4754]: I0218 19:49:32.184206 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e820f1ed-792c-4664-8523-f89fe3eac90a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-skczp\" (UID: \"e820f1ed-792c-4664-8523-f89fe3eac90a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp" Feb 18 19:49:32 crc kubenswrapper[4754]: I0218 19:49:32.193014 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqg78\" (UniqueName: \"kubernetes.io/projected/e820f1ed-792c-4664-8523-f89fe3eac90a-kube-api-access-pqg78\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-skczp\" (UID: \"e820f1ed-792c-4664-8523-f89fe3eac90a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp" Feb 18 19:49:32 crc kubenswrapper[4754]: I0218 19:49:32.220941 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp" Feb 18 19:49:32 crc kubenswrapper[4754]: I0218 19:49:32.781253 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp"] Feb 18 19:49:32 crc kubenswrapper[4754]: W0218 19:49:32.799342 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode820f1ed_792c_4664_8523_f89fe3eac90a.slice/crio-0f78c47da5eaa83a88d12c73f5a4c49f4db5c6f4e76b248338526243761eac82 WatchSource:0}: Error finding container 0f78c47da5eaa83a88d12c73f5a4c49f4db5c6f4e76b248338526243761eac82: Status 404 returned error can't find the container with id 0f78c47da5eaa83a88d12c73f5a4c49f4db5c6f4e76b248338526243761eac82 Feb 18 19:49:32 crc kubenswrapper[4754]: I0218 19:49:32.831688 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp" event={"ID":"e820f1ed-792c-4664-8523-f89fe3eac90a","Type":"ContainerStarted","Data":"0f78c47da5eaa83a88d12c73f5a4c49f4db5c6f4e76b248338526243761eac82"} Feb 18 19:49:33 crc kubenswrapper[4754]: I0218 19:49:33.847580 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp" event={"ID":"e820f1ed-792c-4664-8523-f89fe3eac90a","Type":"ContainerStarted","Data":"1f5493c28e80cdeec3d184fa8222ecf4f962aeca6903f78e52f65af00bdfa9eb"} Feb 18 19:49:33 crc kubenswrapper[4754]: I0218 19:49:33.870275 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp" podStartSLOduration=2.328755746 podStartE2EDuration="2.870257335s" podCreationTimestamp="2026-02-18 19:49:31 +0000 UTC" firstStartedPulling="2026-02-18 19:49:32.802000346 +0000 UTC m=+1875.252413132" lastFinishedPulling="2026-02-18 19:49:33.343501925 +0000 UTC 
m=+1875.793914721" observedRunningTime="2026-02-18 19:49:33.867096227 +0000 UTC m=+1876.317509023" watchObservedRunningTime="2026-02-18 19:49:33.870257335 +0000 UTC m=+1876.320670131" Feb 18 19:49:35 crc kubenswrapper[4754]: I0218 19:49:35.992821 4754 scope.go:117] "RemoveContainer" containerID="0f823c1d6c10e70ca2f31930414e3d92eb69e294e26d5e6a99fe870f79e3e84f" Feb 18 19:49:36 crc kubenswrapper[4754]: I0218 19:49:36.026063 4754 scope.go:117] "RemoveContainer" containerID="0e30b92ebed4bb6fd05b7a710c9994701993088f47724c296dbc81d9da47cefe" Feb 18 19:49:36 crc kubenswrapper[4754]: I0218 19:49:36.084837 4754 scope.go:117] "RemoveContainer" containerID="8fd9dd1c0437f50a9084fe38065e878cde188e0c9b5fb708c30f0cdc4df56daa" Feb 18 19:49:36 crc kubenswrapper[4754]: I0218 19:49:36.119723 4754 scope.go:117] "RemoveContainer" containerID="da88686112b9c3b27980150364cdc419b2bf6726c744b7995d433fb4e9fe0626" Feb 18 19:49:37 crc kubenswrapper[4754]: I0218 19:49:37.885461 4754 generic.go:334] "Generic (PLEG): container finished" podID="e820f1ed-792c-4664-8523-f89fe3eac90a" containerID="1f5493c28e80cdeec3d184fa8222ecf4f962aeca6903f78e52f65af00bdfa9eb" exitCode=0 Feb 18 19:49:37 crc kubenswrapper[4754]: I0218 19:49:37.885564 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp" event={"ID":"e820f1ed-792c-4664-8523-f89fe3eac90a","Type":"ContainerDied","Data":"1f5493c28e80cdeec3d184fa8222ecf4f962aeca6903f78e52f65af00bdfa9eb"} Feb 18 19:49:39 crc kubenswrapper[4754]: I0218 19:49:39.316385 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp" Feb 18 19:49:39 crc kubenswrapper[4754]: I0218 19:49:39.452012 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqg78\" (UniqueName: \"kubernetes.io/projected/e820f1ed-792c-4664-8523-f89fe3eac90a-kube-api-access-pqg78\") pod \"e820f1ed-792c-4664-8523-f89fe3eac90a\" (UID: \"e820f1ed-792c-4664-8523-f89fe3eac90a\") " Feb 18 19:49:39 crc kubenswrapper[4754]: I0218 19:49:39.452303 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e820f1ed-792c-4664-8523-f89fe3eac90a-ssh-key-openstack-edpm-ipam\") pod \"e820f1ed-792c-4664-8523-f89fe3eac90a\" (UID: \"e820f1ed-792c-4664-8523-f89fe3eac90a\") " Feb 18 19:49:39 crc kubenswrapper[4754]: I0218 19:49:39.452393 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e820f1ed-792c-4664-8523-f89fe3eac90a-inventory\") pod \"e820f1ed-792c-4664-8523-f89fe3eac90a\" (UID: \"e820f1ed-792c-4664-8523-f89fe3eac90a\") " Feb 18 19:49:39 crc kubenswrapper[4754]: I0218 19:49:39.460664 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e820f1ed-792c-4664-8523-f89fe3eac90a-kube-api-access-pqg78" (OuterVolumeSpecName: "kube-api-access-pqg78") pod "e820f1ed-792c-4664-8523-f89fe3eac90a" (UID: "e820f1ed-792c-4664-8523-f89fe3eac90a"). InnerVolumeSpecName "kube-api-access-pqg78". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:49:39 crc kubenswrapper[4754]: I0218 19:49:39.478129 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e820f1ed-792c-4664-8523-f89fe3eac90a-inventory" (OuterVolumeSpecName: "inventory") pod "e820f1ed-792c-4664-8523-f89fe3eac90a" (UID: "e820f1ed-792c-4664-8523-f89fe3eac90a"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:39 crc kubenswrapper[4754]: I0218 19:49:39.479398 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e820f1ed-792c-4664-8523-f89fe3eac90a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e820f1ed-792c-4664-8523-f89fe3eac90a" (UID: "e820f1ed-792c-4664-8523-f89fe3eac90a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:39 crc kubenswrapper[4754]: I0218 19:49:39.555595 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqg78\" (UniqueName: \"kubernetes.io/projected/e820f1ed-792c-4664-8523-f89fe3eac90a-kube-api-access-pqg78\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:39 crc kubenswrapper[4754]: I0218 19:49:39.555712 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e820f1ed-792c-4664-8523-f89fe3eac90a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:39 crc kubenswrapper[4754]: I0218 19:49:39.555731 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e820f1ed-792c-4664-8523-f89fe3eac90a-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:39 crc kubenswrapper[4754]: I0218 19:49:39.910935 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp" event={"ID":"e820f1ed-792c-4664-8523-f89fe3eac90a","Type":"ContainerDied","Data":"0f78c47da5eaa83a88d12c73f5a4c49f4db5c6f4e76b248338526243761eac82"} Feb 18 19:49:39 crc kubenswrapper[4754]: I0218 19:49:39.910978 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f78c47da5eaa83a88d12c73f5a4c49f4db5c6f4e76b248338526243761eac82" Feb 18 19:49:39 crc kubenswrapper[4754]: I0218 
19:49:39.911042 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-skczp" Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.008512 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr"] Feb 18 19:49:40 crc kubenswrapper[4754]: E0218 19:49:40.009123 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e820f1ed-792c-4664-8523-f89fe3eac90a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.009190 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e820f1ed-792c-4664-8523-f89fe3eac90a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.009725 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e820f1ed-792c-4664-8523-f89fe3eac90a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.011049 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr" Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.016797 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.017015 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.017454 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bt6gd" Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.017759 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.024911 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr"] Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.167672 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35e0d4f0-09e1-4ecc-b243-2af083b01e07-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sxpxr\" (UID: \"35e0d4f0-09e1-4ecc-b243-2af083b01e07\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr" Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.167839 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35e0d4f0-09e1-4ecc-b243-2af083b01e07-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sxpxr\" (UID: \"35e0d4f0-09e1-4ecc-b243-2af083b01e07\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr" Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.167892 4754 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4c4j\" (UniqueName: \"kubernetes.io/projected/35e0d4f0-09e1-4ecc-b243-2af083b01e07-kube-api-access-k4c4j\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sxpxr\" (UID: \"35e0d4f0-09e1-4ecc-b243-2af083b01e07\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr" Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.269894 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35e0d4f0-09e1-4ecc-b243-2af083b01e07-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sxpxr\" (UID: \"35e0d4f0-09e1-4ecc-b243-2af083b01e07\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr" Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.270031 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35e0d4f0-09e1-4ecc-b243-2af083b01e07-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sxpxr\" (UID: \"35e0d4f0-09e1-4ecc-b243-2af083b01e07\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr" Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.270075 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4c4j\" (UniqueName: \"kubernetes.io/projected/35e0d4f0-09e1-4ecc-b243-2af083b01e07-kube-api-access-k4c4j\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sxpxr\" (UID: \"35e0d4f0-09e1-4ecc-b243-2af083b01e07\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr" Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.274947 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/35e0d4f0-09e1-4ecc-b243-2af083b01e07-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sxpxr\" (UID: \"35e0d4f0-09e1-4ecc-b243-2af083b01e07\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr" Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.276042 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35e0d4f0-09e1-4ecc-b243-2af083b01e07-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sxpxr\" (UID: \"35e0d4f0-09e1-4ecc-b243-2af083b01e07\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr" Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.295069 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4c4j\" (UniqueName: \"kubernetes.io/projected/35e0d4f0-09e1-4ecc-b243-2af083b01e07-kube-api-access-k4c4j\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-sxpxr\" (UID: \"35e0d4f0-09e1-4ecc-b243-2af083b01e07\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr" Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.336472 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr" Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.833852 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr"] Feb 18 19:49:40 crc kubenswrapper[4754]: I0218 19:49:40.920188 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr" event={"ID":"35e0d4f0-09e1-4ecc-b243-2af083b01e07","Type":"ContainerStarted","Data":"51926836c1686a4de11e601ae7a5bed1a787692ca30d1d698e75819e3f989fb1"} Feb 18 19:49:41 crc kubenswrapper[4754]: I0218 19:49:41.931275 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr" event={"ID":"35e0d4f0-09e1-4ecc-b243-2af083b01e07","Type":"ContainerStarted","Data":"880ad9e4b9612359e332c922399acbd22b897f057e56098186e8601c1a27c9b5"} Feb 18 19:49:41 crc kubenswrapper[4754]: I0218 19:49:41.955086 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr" podStartSLOduration=2.517145325 podStartE2EDuration="2.955067815s" podCreationTimestamp="2026-02-18 19:49:39 +0000 UTC" firstStartedPulling="2026-02-18 19:49:40.841115166 +0000 UTC m=+1883.291527972" lastFinishedPulling="2026-02-18 19:49:41.279037666 +0000 UTC m=+1883.729450462" observedRunningTime="2026-02-18 19:49:41.949081319 +0000 UTC m=+1884.399494155" watchObservedRunningTime="2026-02-18 19:49:41.955067815 +0000 UTC m=+1884.405480611" Feb 18 19:50:15 crc kubenswrapper[4754]: I0218 19:50:15.046683 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-mxznm"] Feb 18 19:50:15 crc kubenswrapper[4754]: I0218 19:50:15.069278 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-gsxgh"] Feb 18 19:50:15 crc kubenswrapper[4754]: I0218 19:50:15.077441 4754 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-d6ef-account-create-update-l2xhf"]
Feb 18 19:50:15 crc kubenswrapper[4754]: I0218 19:50:15.088788 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-pmshz"]
Feb 18 19:50:15 crc kubenswrapper[4754]: I0218 19:50:15.098704 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-d6ef-account-create-update-l2xhf"]
Feb 18 19:50:15 crc kubenswrapper[4754]: I0218 19:50:15.108156 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-gsxgh"]
Feb 18 19:50:15 crc kubenswrapper[4754]: I0218 19:50:15.116529 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-pmshz"]
Feb 18 19:50:15 crc kubenswrapper[4754]: I0218 19:50:15.126238 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-mxznm"]
Feb 18 19:50:16 crc kubenswrapper[4754]: I0218 19:50:16.031999 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-dec9-account-create-update-svm4r"]
Feb 18 19:50:16 crc kubenswrapper[4754]: I0218 19:50:16.044489 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5aeb-account-create-update-w2g2d"]
Feb 18 19:50:16 crc kubenswrapper[4754]: I0218 19:50:16.055103 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-dec9-account-create-update-svm4r"]
Feb 18 19:50:16 crc kubenswrapper[4754]: I0218 19:50:16.065664 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5aeb-account-create-update-w2g2d"]
Feb 18 19:50:16 crc kubenswrapper[4754]: I0218 19:50:16.223442 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050eba1f-23da-4294-8cdf-4fad443211a2" path="/var/lib/kubelet/pods/050eba1f-23da-4294-8cdf-4fad443211a2/volumes"
Feb 18 19:50:16 crc kubenswrapper[4754]: I0218 19:50:16.224728 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="154b46c9-21a8-42bc-897c-51c2c9691dd1" path="/var/lib/kubelet/pods/154b46c9-21a8-42bc-897c-51c2c9691dd1/volumes"
Feb 18 19:50:16 crc kubenswrapper[4754]: I0218 19:50:16.225479 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ba06bd2-7464-4e3b-bb9f-cbafc0f44608" path="/var/lib/kubelet/pods/1ba06bd2-7464-4e3b-bb9f-cbafc0f44608/volumes"
Feb 18 19:50:16 crc kubenswrapper[4754]: I0218 19:50:16.226518 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4287d345-c068-46eb-a185-ee415ed11ade" path="/var/lib/kubelet/pods/4287d345-c068-46eb-a185-ee415ed11ade/volumes"
Feb 18 19:50:16 crc kubenswrapper[4754]: I0218 19:50:16.227836 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7178952f-0bf0-472c-9a3f-0c0794b32590" path="/var/lib/kubelet/pods/7178952f-0bf0-472c-9a3f-0c0794b32590/volumes"
Feb 18 19:50:16 crc kubenswrapper[4754]: I0218 19:50:16.228437 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bec79017-52f6-47b7-b09f-c6ad2f738d97" path="/var/lib/kubelet/pods/bec79017-52f6-47b7-b09f-c6ad2f738d97/volumes"
Feb 18 19:50:20 crc kubenswrapper[4754]: I0218 19:50:20.398294 4754 generic.go:334] "Generic (PLEG): container finished" podID="35e0d4f0-09e1-4ecc-b243-2af083b01e07" containerID="880ad9e4b9612359e332c922399acbd22b897f057e56098186e8601c1a27c9b5" exitCode=0
Feb 18 19:50:20 crc kubenswrapper[4754]: I0218 19:50:20.398378 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr" event={"ID":"35e0d4f0-09e1-4ecc-b243-2af083b01e07","Type":"ContainerDied","Data":"880ad9e4b9612359e332c922399acbd22b897f057e56098186e8601c1a27c9b5"}
Feb 18 19:50:21 crc kubenswrapper[4754]: I0218 19:50:21.864446 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr"
Feb 18 19:50:21 crc kubenswrapper[4754]: I0218 19:50:21.972062 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4c4j\" (UniqueName: \"kubernetes.io/projected/35e0d4f0-09e1-4ecc-b243-2af083b01e07-kube-api-access-k4c4j\") pod \"35e0d4f0-09e1-4ecc-b243-2af083b01e07\" (UID: \"35e0d4f0-09e1-4ecc-b243-2af083b01e07\") "
Feb 18 19:50:21 crc kubenswrapper[4754]: I0218 19:50:21.972121 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35e0d4f0-09e1-4ecc-b243-2af083b01e07-inventory\") pod \"35e0d4f0-09e1-4ecc-b243-2af083b01e07\" (UID: \"35e0d4f0-09e1-4ecc-b243-2af083b01e07\") "
Feb 18 19:50:21 crc kubenswrapper[4754]: I0218 19:50:21.972219 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35e0d4f0-09e1-4ecc-b243-2af083b01e07-ssh-key-openstack-edpm-ipam\") pod \"35e0d4f0-09e1-4ecc-b243-2af083b01e07\" (UID: \"35e0d4f0-09e1-4ecc-b243-2af083b01e07\") "
Feb 18 19:50:21 crc kubenswrapper[4754]: I0218 19:50:21.977600 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35e0d4f0-09e1-4ecc-b243-2af083b01e07-kube-api-access-k4c4j" (OuterVolumeSpecName: "kube-api-access-k4c4j") pod "35e0d4f0-09e1-4ecc-b243-2af083b01e07" (UID: "35e0d4f0-09e1-4ecc-b243-2af083b01e07"). InnerVolumeSpecName "kube-api-access-k4c4j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.002200 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e0d4f0-09e1-4ecc-b243-2af083b01e07-inventory" (OuterVolumeSpecName: "inventory") pod "35e0d4f0-09e1-4ecc-b243-2af083b01e07" (UID: "35e0d4f0-09e1-4ecc-b243-2af083b01e07"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.002534 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e0d4f0-09e1-4ecc-b243-2af083b01e07-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "35e0d4f0-09e1-4ecc-b243-2af083b01e07" (UID: "35e0d4f0-09e1-4ecc-b243-2af083b01e07"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.074977 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4c4j\" (UniqueName: \"kubernetes.io/projected/35e0d4f0-09e1-4ecc-b243-2af083b01e07-kube-api-access-k4c4j\") on node \"crc\" DevicePath \"\""
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.075020 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35e0d4f0-09e1-4ecc-b243-2af083b01e07-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.075034 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/35e0d4f0-09e1-4ecc-b243-2af083b01e07-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.422462 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr" event={"ID":"35e0d4f0-09e1-4ecc-b243-2af083b01e07","Type":"ContainerDied","Data":"51926836c1686a4de11e601ae7a5bed1a787692ca30d1d698e75819e3f989fb1"}
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.422503 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51926836c1686a4de11e601ae7a5bed1a787692ca30d1d698e75819e3f989fb1"
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.422554 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-sxpxr"
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.532202 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf"]
Feb 18 19:50:22 crc kubenswrapper[4754]: E0218 19:50:22.533005 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35e0d4f0-09e1-4ecc-b243-2af083b01e07" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.533030 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="35e0d4f0-09e1-4ecc-b243-2af083b01e07" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.533263 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="35e0d4f0-09e1-4ecc-b243-2af083b01e07" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.534077 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf"
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.536813 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.537968 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bt6gd"
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.538340 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.538627 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.546828 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf"]
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.584954 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59549790-d804-4e74-a27b-302c1bbd8e44-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf\" (UID: \"59549790-d804-4e74-a27b-302c1bbd8e44\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf"
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.585004 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-426mm\" (UniqueName: \"kubernetes.io/projected/59549790-d804-4e74-a27b-302c1bbd8e44-kube-api-access-426mm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf\" (UID: \"59549790-d804-4e74-a27b-302c1bbd8e44\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf"
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.585036 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59549790-d804-4e74-a27b-302c1bbd8e44-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf\" (UID: \"59549790-d804-4e74-a27b-302c1bbd8e44\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf"
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.687080 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59549790-d804-4e74-a27b-302c1bbd8e44-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf\" (UID: \"59549790-d804-4e74-a27b-302c1bbd8e44\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf"
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.687194 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-426mm\" (UniqueName: \"kubernetes.io/projected/59549790-d804-4e74-a27b-302c1bbd8e44-kube-api-access-426mm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf\" (UID: \"59549790-d804-4e74-a27b-302c1bbd8e44\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf"
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.687246 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59549790-d804-4e74-a27b-302c1bbd8e44-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf\" (UID: \"59549790-d804-4e74-a27b-302c1bbd8e44\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf"
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.692504 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59549790-d804-4e74-a27b-302c1bbd8e44-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf\" (UID: \"59549790-d804-4e74-a27b-302c1bbd8e44\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf"
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.695046 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59549790-d804-4e74-a27b-302c1bbd8e44-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf\" (UID: \"59549790-d804-4e74-a27b-302c1bbd8e44\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf"
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.714043 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-426mm\" (UniqueName: \"kubernetes.io/projected/59549790-d804-4e74-a27b-302c1bbd8e44-kube-api-access-426mm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf\" (UID: \"59549790-d804-4e74-a27b-302c1bbd8e44\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf"
Feb 18 19:50:22 crc kubenswrapper[4754]: I0218 19:50:22.855948 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf"
Feb 18 19:50:23 crc kubenswrapper[4754]: I0218 19:50:23.427372 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf"]
Feb 18 19:50:24 crc kubenswrapper[4754]: I0218 19:50:24.443605 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf" event={"ID":"59549790-d804-4e74-a27b-302c1bbd8e44","Type":"ContainerStarted","Data":"156cea6c6904bbb726aeba6e5ef28527a27534245ca773d270c9543f54513191"}
Feb 18 19:50:24 crc kubenswrapper[4754]: I0218 19:50:24.444490 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf" event={"ID":"59549790-d804-4e74-a27b-302c1bbd8e44","Type":"ContainerStarted","Data":"4f7c87f363f4cfc6fad1fe448896db86a48fda46cf67dbbb9c52526a0f50fb1b"}
Feb 18 19:50:24 crc kubenswrapper[4754]: I0218 19:50:24.465115 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf" podStartSLOduration=2.037608293 podStartE2EDuration="2.465095899s" podCreationTimestamp="2026-02-18 19:50:22 +0000 UTC" firstStartedPulling="2026-02-18 19:50:23.430308379 +0000 UTC m=+1925.880721175" lastFinishedPulling="2026-02-18 19:50:23.857795985 +0000 UTC m=+1926.308208781" observedRunningTime="2026-02-18 19:50:24.457846903 +0000 UTC m=+1926.908259739" watchObservedRunningTime="2026-02-18 19:50:24.465095899 +0000 UTC m=+1926.915508685"
Feb 18 19:50:36 crc kubenswrapper[4754]: I0218 19:50:36.248270 4754 scope.go:117] "RemoveContainer" containerID="38868d996db43619ee1a628c5127ced8fd6e1d705c8afde1689d6ad3ba18032a"
Feb 18 19:50:36 crc kubenswrapper[4754]: I0218 19:50:36.280031 4754 scope.go:117] "RemoveContainer" containerID="669bf70abd2469d0e706fd74b2069e8b88c57ab19c56fa45b5d7b186fd66a7bf"
Feb 18 19:50:36 crc kubenswrapper[4754]: I0218 19:50:36.327708 4754 scope.go:117] "RemoveContainer" containerID="09802535f499ec8c8fcdf237c63954a07d837d25c3d20fa6a4e67a571f394775"
Feb 18 19:50:36 crc kubenswrapper[4754]: I0218 19:50:36.370635 4754 scope.go:117] "RemoveContainer" containerID="8a25a6648529459b524d55d22e89f2da996df30cb2103454ce944202b4041ace"
Feb 18 19:50:36 crc kubenswrapper[4754]: I0218 19:50:36.416114 4754 scope.go:117] "RemoveContainer" containerID="846d33c5dccf93caa4ea82def9127153fdfb66d491a261f7625068799e343d33"
Feb 18 19:50:36 crc kubenswrapper[4754]: I0218 19:50:36.461272 4754 scope.go:117] "RemoveContainer" containerID="cef6e2514d9e40174c53b809235e91a5fbf4cbf1ee371e8cabb38bef985c866a"
Feb 18 19:50:38 crc kubenswrapper[4754]: I0218 19:50:38.096347 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 19:50:38 crc kubenswrapper[4754]: I0218 19:50:38.096739 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 19:50:43 crc kubenswrapper[4754]: I0218 19:50:43.048025 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mjgmn"]
Feb 18 19:50:43 crc kubenswrapper[4754]: I0218 19:50:43.062271 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mjgmn"]
Feb 18 19:50:44 crc kubenswrapper[4754]: I0218 19:50:44.224186 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41e05b6b-58e2-448b-b109-a8b149061c37" path="/var/lib/kubelet/pods/41e05b6b-58e2-448b-b109-a8b149061c37/volumes"
Feb 18 19:50:45 crc kubenswrapper[4754]: I0218 19:50:45.943438 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fzwjz"]
Feb 18 19:50:45 crc kubenswrapper[4754]: I0218 19:50:45.946130 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fzwjz"
Feb 18 19:50:45 crc kubenswrapper[4754]: I0218 19:50:45.953105 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fzwjz"]
Feb 18 19:50:45 crc kubenswrapper[4754]: I0218 19:50:45.989353 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f60445d5-3887-430e-bc83-fd734a7a5706-catalog-content\") pod \"certified-operators-fzwjz\" (UID: \"f60445d5-3887-430e-bc83-fd734a7a5706\") " pod="openshift-marketplace/certified-operators-fzwjz"
Feb 18 19:50:45 crc kubenswrapper[4754]: I0218 19:50:45.989744 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f60445d5-3887-430e-bc83-fd734a7a5706-utilities\") pod \"certified-operators-fzwjz\" (UID: \"f60445d5-3887-430e-bc83-fd734a7a5706\") " pod="openshift-marketplace/certified-operators-fzwjz"
Feb 18 19:50:45 crc kubenswrapper[4754]: I0218 19:50:45.989863 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jtrd\" (UniqueName: \"kubernetes.io/projected/f60445d5-3887-430e-bc83-fd734a7a5706-kube-api-access-5jtrd\") pod \"certified-operators-fzwjz\" (UID: \"f60445d5-3887-430e-bc83-fd734a7a5706\") " pod="openshift-marketplace/certified-operators-fzwjz"
Feb 18 19:50:46 crc kubenswrapper[4754]: I0218 19:50:46.091981 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jtrd\" (UniqueName: \"kubernetes.io/projected/f60445d5-3887-430e-bc83-fd734a7a5706-kube-api-access-5jtrd\") pod \"certified-operators-fzwjz\" (UID: \"f60445d5-3887-430e-bc83-fd734a7a5706\") " pod="openshift-marketplace/certified-operators-fzwjz"
Feb 18 19:50:46 crc kubenswrapper[4754]: I0218 19:50:46.092297 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f60445d5-3887-430e-bc83-fd734a7a5706-utilities\") pod \"certified-operators-fzwjz\" (UID: \"f60445d5-3887-430e-bc83-fd734a7a5706\") " pod="openshift-marketplace/certified-operators-fzwjz"
Feb 18 19:50:46 crc kubenswrapper[4754]: I0218 19:50:46.092464 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f60445d5-3887-430e-bc83-fd734a7a5706-catalog-content\") pod \"certified-operators-fzwjz\" (UID: \"f60445d5-3887-430e-bc83-fd734a7a5706\") " pod="openshift-marketplace/certified-operators-fzwjz"
Feb 18 19:50:46 crc kubenswrapper[4754]: I0218 19:50:46.092788 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f60445d5-3887-430e-bc83-fd734a7a5706-utilities\") pod \"certified-operators-fzwjz\" (UID: \"f60445d5-3887-430e-bc83-fd734a7a5706\") " pod="openshift-marketplace/certified-operators-fzwjz"
Feb 18 19:50:46 crc kubenswrapper[4754]: I0218 19:50:46.092819 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f60445d5-3887-430e-bc83-fd734a7a5706-catalog-content\") pod \"certified-operators-fzwjz\" (UID: \"f60445d5-3887-430e-bc83-fd734a7a5706\") " pod="openshift-marketplace/certified-operators-fzwjz"
Feb 18 19:50:46 crc kubenswrapper[4754]: I0218 19:50:46.111316 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jtrd\" (UniqueName: \"kubernetes.io/projected/f60445d5-3887-430e-bc83-fd734a7a5706-kube-api-access-5jtrd\") pod \"certified-operators-fzwjz\" (UID: \"f60445d5-3887-430e-bc83-fd734a7a5706\") " pod="openshift-marketplace/certified-operators-fzwjz"
Feb 18 19:50:46 crc kubenswrapper[4754]: I0218 19:50:46.276115 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fzwjz"
Feb 18 19:50:46 crc kubenswrapper[4754]: I0218 19:50:46.769728 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fzwjz"]
Feb 18 19:50:46 crc kubenswrapper[4754]: W0218 19:50:46.786339 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf60445d5_3887_430e_bc83_fd734a7a5706.slice/crio-4f47c29f8a94adcf6cc0437b99d6cab06e35241561d726900198e6023c297db8 WatchSource:0}: Error finding container 4f47c29f8a94adcf6cc0437b99d6cab06e35241561d726900198e6023c297db8: Status 404 returned error can't find the container with id 4f47c29f8a94adcf6cc0437b99d6cab06e35241561d726900198e6023c297db8
Feb 18 19:50:47 crc kubenswrapper[4754]: I0218 19:50:47.655130 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzwjz" event={"ID":"f60445d5-3887-430e-bc83-fd734a7a5706","Type":"ContainerStarted","Data":"4f47c29f8a94adcf6cc0437b99d6cab06e35241561d726900198e6023c297db8"}
Feb 18 19:50:48 crc kubenswrapper[4754]: I0218 19:50:48.671257 4754 generic.go:334] "Generic (PLEG): container finished" podID="f60445d5-3887-430e-bc83-fd734a7a5706" containerID="5a4256803590e2d9456e59f6faf0ae20e0db356f91261ec476cf02d42d228094" exitCode=0
Feb 18 19:50:48 crc kubenswrapper[4754]: I0218 19:50:48.671428 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzwjz" event={"ID":"f60445d5-3887-430e-bc83-fd734a7a5706","Type":"ContainerDied","Data":"5a4256803590e2d9456e59f6faf0ae20e0db356f91261ec476cf02d42d228094"}
Feb 18 19:50:49 crc kubenswrapper[4754]: I0218 19:50:49.687550 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzwjz" event={"ID":"f60445d5-3887-430e-bc83-fd734a7a5706","Type":"ContainerStarted","Data":"23213fcd6a61e5d05ad54bcd7521644885645ba1019d2256ad73d9ccd3506637"}
Feb 18 19:50:50 crc kubenswrapper[4754]: I0218 19:50:50.701324 4754 generic.go:334] "Generic (PLEG): container finished" podID="f60445d5-3887-430e-bc83-fd734a7a5706" containerID="23213fcd6a61e5d05ad54bcd7521644885645ba1019d2256ad73d9ccd3506637" exitCode=0
Feb 18 19:50:50 crc kubenswrapper[4754]: I0218 19:50:50.701422 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzwjz" event={"ID":"f60445d5-3887-430e-bc83-fd734a7a5706","Type":"ContainerDied","Data":"23213fcd6a61e5d05ad54bcd7521644885645ba1019d2256ad73d9ccd3506637"}
Feb 18 19:50:51 crc kubenswrapper[4754]: I0218 19:50:51.712605 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzwjz" event={"ID":"f60445d5-3887-430e-bc83-fd734a7a5706","Type":"ContainerStarted","Data":"340c983597b10f1e655d232e3dd6297ef3f7dda862f108e358655e46c8a7b1ac"}
Feb 18 19:50:51 crc kubenswrapper[4754]: I0218 19:50:51.737234 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fzwjz" podStartSLOduration=4.330258254 podStartE2EDuration="6.73721132s" podCreationTimestamp="2026-02-18 19:50:45 +0000 UTC" firstStartedPulling="2026-02-18 19:50:48.681672542 +0000 UTC m=+1951.132085338" lastFinishedPulling="2026-02-18 19:50:51.088625608 +0000 UTC m=+1953.539038404" observedRunningTime="2026-02-18 19:50:51.729648722 +0000 UTC m=+1954.180061538" watchObservedRunningTime="2026-02-18 19:50:51.73721132 +0000 UTC m=+1954.187624126"
Feb 18 19:50:56 crc kubenswrapper[4754]: I0218 19:50:56.276465 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fzwjz"
Feb 18 19:50:56 crc kubenswrapper[4754]: I0218 19:50:56.277693 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fzwjz"
Feb 18 19:50:56 crc kubenswrapper[4754]: I0218 19:50:56.329506 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fzwjz"
Feb 18 19:50:56 crc kubenswrapper[4754]: I0218 19:50:56.833709 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fzwjz"
Feb 18 19:50:56 crc kubenswrapper[4754]: I0218 19:50:56.882944 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fzwjz"]
Feb 18 19:50:58 crc kubenswrapper[4754]: I0218 19:50:58.791850 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fzwjz" podUID="f60445d5-3887-430e-bc83-fd734a7a5706" containerName="registry-server" containerID="cri-o://340c983597b10f1e655d232e3dd6297ef3f7dda862f108e358655e46c8a7b1ac" gracePeriod=2
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.231599 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fzwjz"
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.390736 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f60445d5-3887-430e-bc83-fd734a7a5706-catalog-content\") pod \"f60445d5-3887-430e-bc83-fd734a7a5706\" (UID: \"f60445d5-3887-430e-bc83-fd734a7a5706\") "
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.390975 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jtrd\" (UniqueName: \"kubernetes.io/projected/f60445d5-3887-430e-bc83-fd734a7a5706-kube-api-access-5jtrd\") pod \"f60445d5-3887-430e-bc83-fd734a7a5706\" (UID: \"f60445d5-3887-430e-bc83-fd734a7a5706\") "
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.391041 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f60445d5-3887-430e-bc83-fd734a7a5706-utilities\") pod \"f60445d5-3887-430e-bc83-fd734a7a5706\" (UID: \"f60445d5-3887-430e-bc83-fd734a7a5706\") "
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.391888 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f60445d5-3887-430e-bc83-fd734a7a5706-utilities" (OuterVolumeSpecName: "utilities") pod "f60445d5-3887-430e-bc83-fd734a7a5706" (UID: "f60445d5-3887-430e-bc83-fd734a7a5706"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.398993 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f60445d5-3887-430e-bc83-fd734a7a5706-kube-api-access-5jtrd" (OuterVolumeSpecName: "kube-api-access-5jtrd") pod "f60445d5-3887-430e-bc83-fd734a7a5706" (UID: "f60445d5-3887-430e-bc83-fd734a7a5706"). InnerVolumeSpecName "kube-api-access-5jtrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.447918 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f60445d5-3887-430e-bc83-fd734a7a5706-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f60445d5-3887-430e-bc83-fd734a7a5706" (UID: "f60445d5-3887-430e-bc83-fd734a7a5706"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.493731 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jtrd\" (UniqueName: \"kubernetes.io/projected/f60445d5-3887-430e-bc83-fd734a7a5706-kube-api-access-5jtrd\") on node \"crc\" DevicePath \"\""
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.493765 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f60445d5-3887-430e-bc83-fd734a7a5706-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.493777 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f60445d5-3887-430e-bc83-fd734a7a5706-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.803957 4754 generic.go:334] "Generic (PLEG): container finished" podID="f60445d5-3887-430e-bc83-fd734a7a5706" containerID="340c983597b10f1e655d232e3dd6297ef3f7dda862f108e358655e46c8a7b1ac" exitCode=0
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.804009 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzwjz" event={"ID":"f60445d5-3887-430e-bc83-fd734a7a5706","Type":"ContainerDied","Data":"340c983597b10f1e655d232e3dd6297ef3f7dda862f108e358655e46c8a7b1ac"}
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.804045 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzwjz" event={"ID":"f60445d5-3887-430e-bc83-fd734a7a5706","Type":"ContainerDied","Data":"4f47c29f8a94adcf6cc0437b99d6cab06e35241561d726900198e6023c297db8"}
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.804050 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fzwjz"
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.804069 4754 scope.go:117] "RemoveContainer" containerID="340c983597b10f1e655d232e3dd6297ef3f7dda862f108e358655e46c8a7b1ac"
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.826548 4754 scope.go:117] "RemoveContainer" containerID="23213fcd6a61e5d05ad54bcd7521644885645ba1019d2256ad73d9ccd3506637"
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.855350 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fzwjz"]
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.856330 4754 scope.go:117] "RemoveContainer" containerID="5a4256803590e2d9456e59f6faf0ae20e0db356f91261ec476cf02d42d228094"
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.868907 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fzwjz"]
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.899018 4754 scope.go:117] "RemoveContainer" containerID="340c983597b10f1e655d232e3dd6297ef3f7dda862f108e358655e46c8a7b1ac"
Feb 18 19:50:59 crc kubenswrapper[4754]: E0218 19:50:59.899454 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"340c983597b10f1e655d232e3dd6297ef3f7dda862f108e358655e46c8a7b1ac\": container with ID starting with 340c983597b10f1e655d232e3dd6297ef3f7dda862f108e358655e46c8a7b1ac not found: ID does not exist" containerID="340c983597b10f1e655d232e3dd6297ef3f7dda862f108e358655e46c8a7b1ac"
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.899486 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"340c983597b10f1e655d232e3dd6297ef3f7dda862f108e358655e46c8a7b1ac"} err="failed to get container status \"340c983597b10f1e655d232e3dd6297ef3f7dda862f108e358655e46c8a7b1ac\": rpc error: code = NotFound desc = could not find container \"340c983597b10f1e655d232e3dd6297ef3f7dda862f108e358655e46c8a7b1ac\": container with ID starting with 340c983597b10f1e655d232e3dd6297ef3f7dda862f108e358655e46c8a7b1ac not found: ID does not exist"
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.899507 4754 scope.go:117] "RemoveContainer" containerID="23213fcd6a61e5d05ad54bcd7521644885645ba1019d2256ad73d9ccd3506637"
Feb 18 19:50:59 crc kubenswrapper[4754]: E0218 19:50:59.899815 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23213fcd6a61e5d05ad54bcd7521644885645ba1019d2256ad73d9ccd3506637\": container with ID starting with 23213fcd6a61e5d05ad54bcd7521644885645ba1019d2256ad73d9ccd3506637 not found: ID does not exist" containerID="23213fcd6a61e5d05ad54bcd7521644885645ba1019d2256ad73d9ccd3506637"
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.899860 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23213fcd6a61e5d05ad54bcd7521644885645ba1019d2256ad73d9ccd3506637"} err="failed to get container status \"23213fcd6a61e5d05ad54bcd7521644885645ba1019d2256ad73d9ccd3506637\": rpc error: code = NotFound desc = could not find container \"23213fcd6a61e5d05ad54bcd7521644885645ba1019d2256ad73d9ccd3506637\": container with ID starting with 23213fcd6a61e5d05ad54bcd7521644885645ba1019d2256ad73d9ccd3506637 not found: ID does not exist"
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.899889 4754 scope.go:117] "RemoveContainer" containerID="5a4256803590e2d9456e59f6faf0ae20e0db356f91261ec476cf02d42d228094"
Feb 18 19:50:59 crc kubenswrapper[4754]: E0218 19:50:59.900380 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a4256803590e2d9456e59f6faf0ae20e0db356f91261ec476cf02d42d228094\": container with ID starting with 5a4256803590e2d9456e59f6faf0ae20e0db356f91261ec476cf02d42d228094 not found: ID does not exist" containerID="5a4256803590e2d9456e59f6faf0ae20e0db356f91261ec476cf02d42d228094"
Feb 18 19:50:59 crc kubenswrapper[4754]: I0218 19:50:59.900403 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a4256803590e2d9456e59f6faf0ae20e0db356f91261ec476cf02d42d228094"} err="failed to get container status \"5a4256803590e2d9456e59f6faf0ae20e0db356f91261ec476cf02d42d228094\": rpc error: code = NotFound desc = could not find container \"5a4256803590e2d9456e59f6faf0ae20e0db356f91261ec476cf02d42d228094\": container with ID starting with 5a4256803590e2d9456e59f6faf0ae20e0db356f91261ec476cf02d42d228094 not found: ID does not exist"
Feb 18 19:51:00 crc kubenswrapper[4754]: I0218 19:51:00.222267 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f60445d5-3887-430e-bc83-fd734a7a5706" path="/var/lib/kubelet/pods/f60445d5-3887-430e-bc83-fd734a7a5706/volumes"
Feb 18 19:51:07 crc kubenswrapper[4754]: I0218 19:51:07.051961 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-8n2ll"]
Feb 18 19:51:07 crc kubenswrapper[4754]: I0218 19:51:07.063627 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-8n2ll"]
Feb 18 19:51:08 crc kubenswrapper[4754]: I0218 19:51:08.031698 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7m5f2"]
Feb 18 19:51:08 crc kubenswrapper[4754]: I0218 19:51:08.042002 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7m5f2"]
Feb 18 19:51:08 crc kubenswrapper[4754]:
I0218 19:51:08.097212 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:51:08 crc kubenswrapper[4754]: I0218 19:51:08.097288 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:51:08 crc kubenswrapper[4754]: I0218 19:51:08.229237 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0498eea3-d1f4-43dd-82a3-4e98065a9fda" path="/var/lib/kubelet/pods/0498eea3-d1f4-43dd-82a3-4e98065a9fda/volumes" Feb 18 19:51:08 crc kubenswrapper[4754]: I0218 19:51:08.230528 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1" path="/var/lib/kubelet/pods/ed8f2e4f-3d20-4e37-baf5-3d9597bee3d1/volumes" Feb 18 19:51:11 crc kubenswrapper[4754]: I0218 19:51:11.935953 4754 generic.go:334] "Generic (PLEG): container finished" podID="59549790-d804-4e74-a27b-302c1bbd8e44" containerID="156cea6c6904bbb726aeba6e5ef28527a27534245ca773d270c9543f54513191" exitCode=0 Feb 18 19:51:11 crc kubenswrapper[4754]: I0218 19:51:11.936036 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf" event={"ID":"59549790-d804-4e74-a27b-302c1bbd8e44","Type":"ContainerDied","Data":"156cea6c6904bbb726aeba6e5ef28527a27534245ca773d270c9543f54513191"} Feb 18 19:51:13 crc kubenswrapper[4754]: I0218 19:51:13.481692 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf" Feb 18 19:51:13 crc kubenswrapper[4754]: I0218 19:51:13.667216 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59549790-d804-4e74-a27b-302c1bbd8e44-ssh-key-openstack-edpm-ipam\") pod \"59549790-d804-4e74-a27b-302c1bbd8e44\" (UID: \"59549790-d804-4e74-a27b-302c1bbd8e44\") " Feb 18 19:51:13 crc kubenswrapper[4754]: I0218 19:51:13.667320 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59549790-d804-4e74-a27b-302c1bbd8e44-inventory\") pod \"59549790-d804-4e74-a27b-302c1bbd8e44\" (UID: \"59549790-d804-4e74-a27b-302c1bbd8e44\") " Feb 18 19:51:13 crc kubenswrapper[4754]: I0218 19:51:13.667367 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-426mm\" (UniqueName: \"kubernetes.io/projected/59549790-d804-4e74-a27b-302c1bbd8e44-kube-api-access-426mm\") pod \"59549790-d804-4e74-a27b-302c1bbd8e44\" (UID: \"59549790-d804-4e74-a27b-302c1bbd8e44\") " Feb 18 19:51:13 crc kubenswrapper[4754]: I0218 19:51:13.679467 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59549790-d804-4e74-a27b-302c1bbd8e44-kube-api-access-426mm" (OuterVolumeSpecName: "kube-api-access-426mm") pod "59549790-d804-4e74-a27b-302c1bbd8e44" (UID: "59549790-d804-4e74-a27b-302c1bbd8e44"). InnerVolumeSpecName "kube-api-access-426mm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:51:13 crc kubenswrapper[4754]: I0218 19:51:13.719066 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59549790-d804-4e74-a27b-302c1bbd8e44-inventory" (OuterVolumeSpecName: "inventory") pod "59549790-d804-4e74-a27b-302c1bbd8e44" (UID: "59549790-d804-4e74-a27b-302c1bbd8e44"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:51:13 crc kubenswrapper[4754]: I0218 19:51:13.720779 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59549790-d804-4e74-a27b-302c1bbd8e44-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "59549790-d804-4e74-a27b-302c1bbd8e44" (UID: "59549790-d804-4e74-a27b-302c1bbd8e44"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:51:13 crc kubenswrapper[4754]: I0218 19:51:13.769600 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59549790-d804-4e74-a27b-302c1bbd8e44-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:13 crc kubenswrapper[4754]: I0218 19:51:13.769634 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-426mm\" (UniqueName: \"kubernetes.io/projected/59549790-d804-4e74-a27b-302c1bbd8e44-kube-api-access-426mm\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:13 crc kubenswrapper[4754]: I0218 19:51:13.769649 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59549790-d804-4e74-a27b-302c1bbd8e44-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.001901 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf" event={"ID":"59549790-d804-4e74-a27b-302c1bbd8e44","Type":"ContainerDied","Data":"4f7c87f363f4cfc6fad1fe448896db86a48fda46cf67dbbb9c52526a0f50fb1b"} Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.001968 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f7c87f363f4cfc6fad1fe448896db86a48fda46cf67dbbb9c52526a0f50fb1b" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 
19:51:14.002079 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6bqxf" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.057400 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-24qzm"] Feb 18 19:51:14 crc kubenswrapper[4754]: E0218 19:51:14.057872 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60445d5-3887-430e-bc83-fd734a7a5706" containerName="registry-server" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.057890 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60445d5-3887-430e-bc83-fd734a7a5706" containerName="registry-server" Feb 18 19:51:14 crc kubenswrapper[4754]: E0218 19:51:14.057929 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60445d5-3887-430e-bc83-fd734a7a5706" containerName="extract-content" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.057935 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60445d5-3887-430e-bc83-fd734a7a5706" containerName="extract-content" Feb 18 19:51:14 crc kubenswrapper[4754]: E0218 19:51:14.057944 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59549790-d804-4e74-a27b-302c1bbd8e44" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.057959 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="59549790-d804-4e74-a27b-302c1bbd8e44" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:51:14 crc kubenswrapper[4754]: E0218 19:51:14.057967 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60445d5-3887-430e-bc83-fd734a7a5706" containerName="extract-utilities" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.057974 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60445d5-3887-430e-bc83-fd734a7a5706" containerName="extract-utilities" 
Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.058191 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="59549790-d804-4e74-a27b-302c1bbd8e44" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.058216 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60445d5-3887-430e-bc83-fd734a7a5706" containerName="registry-server" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.058887 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-24qzm" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.061174 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.061241 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bt6gd" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.061470 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.062822 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.075909 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-24qzm"] Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.181534 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58a9f756-d53b-4e64-8fbe-fa5cf22c9d76-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-24qzm\" (UID: \"58a9f756-d53b-4e64-8fbe-fa5cf22c9d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-24qzm" Feb 18 19:51:14 crc 
kubenswrapper[4754]: I0218 19:51:14.182043 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4s4t\" (UniqueName: \"kubernetes.io/projected/58a9f756-d53b-4e64-8fbe-fa5cf22c9d76-kube-api-access-b4s4t\") pod \"ssh-known-hosts-edpm-deployment-24qzm\" (UID: \"58a9f756-d53b-4e64-8fbe-fa5cf22c9d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-24qzm" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.182215 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/58a9f756-d53b-4e64-8fbe-fa5cf22c9d76-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-24qzm\" (UID: \"58a9f756-d53b-4e64-8fbe-fa5cf22c9d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-24qzm" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.284087 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58a9f756-d53b-4e64-8fbe-fa5cf22c9d76-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-24qzm\" (UID: \"58a9f756-d53b-4e64-8fbe-fa5cf22c9d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-24qzm" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.284135 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4s4t\" (UniqueName: \"kubernetes.io/projected/58a9f756-d53b-4e64-8fbe-fa5cf22c9d76-kube-api-access-b4s4t\") pod \"ssh-known-hosts-edpm-deployment-24qzm\" (UID: \"58a9f756-d53b-4e64-8fbe-fa5cf22c9d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-24qzm" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.284385 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/58a9f756-d53b-4e64-8fbe-fa5cf22c9d76-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-24qzm\" (UID: 
\"58a9f756-d53b-4e64-8fbe-fa5cf22c9d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-24qzm" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.289076 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/58a9f756-d53b-4e64-8fbe-fa5cf22c9d76-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-24qzm\" (UID: \"58a9f756-d53b-4e64-8fbe-fa5cf22c9d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-24qzm" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.291172 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58a9f756-d53b-4e64-8fbe-fa5cf22c9d76-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-24qzm\" (UID: \"58a9f756-d53b-4e64-8fbe-fa5cf22c9d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-24qzm" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.304012 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4s4t\" (UniqueName: \"kubernetes.io/projected/58a9f756-d53b-4e64-8fbe-fa5cf22c9d76-kube-api-access-b4s4t\") pod \"ssh-known-hosts-edpm-deployment-24qzm\" (UID: \"58a9f756-d53b-4e64-8fbe-fa5cf22c9d76\") " pod="openstack/ssh-known-hosts-edpm-deployment-24qzm" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.378430 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-24qzm" Feb 18 19:51:14 crc kubenswrapper[4754]: I0218 19:51:14.984889 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-24qzm"] Feb 18 19:51:15 crc kubenswrapper[4754]: I0218 19:51:15.015827 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-24qzm" event={"ID":"58a9f756-d53b-4e64-8fbe-fa5cf22c9d76","Type":"ContainerStarted","Data":"5de55f66583b0e514539f464edc85d6e957772f708492d8d412122633baf0968"} Feb 18 19:51:16 crc kubenswrapper[4754]: I0218 19:51:16.025371 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-24qzm" event={"ID":"58a9f756-d53b-4e64-8fbe-fa5cf22c9d76","Type":"ContainerStarted","Data":"29bf8d8922a98e4d6b7a605bbc93643211d01d122be4e054bbb4fcd4fd00f9ba"} Feb 18 19:51:16 crc kubenswrapper[4754]: I0218 19:51:16.042801 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-24qzm" podStartSLOduration=1.651469988 podStartE2EDuration="2.042774143s" podCreationTimestamp="2026-02-18 19:51:14 +0000 UTC" firstStartedPulling="2026-02-18 19:51:14.989830994 +0000 UTC m=+1977.440243790" lastFinishedPulling="2026-02-18 19:51:15.381135149 +0000 UTC m=+1977.831547945" observedRunningTime="2026-02-18 19:51:16.039854291 +0000 UTC m=+1978.490267087" watchObservedRunningTime="2026-02-18 19:51:16.042774143 +0000 UTC m=+1978.493186949" Feb 18 19:51:23 crc kubenswrapper[4754]: I0218 19:51:23.083379 4754 generic.go:334] "Generic (PLEG): container finished" podID="58a9f756-d53b-4e64-8fbe-fa5cf22c9d76" containerID="29bf8d8922a98e4d6b7a605bbc93643211d01d122be4e054bbb4fcd4fd00f9ba" exitCode=0 Feb 18 19:51:23 crc kubenswrapper[4754]: I0218 19:51:23.083509 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-24qzm" 
event={"ID":"58a9f756-d53b-4e64-8fbe-fa5cf22c9d76","Type":"ContainerDied","Data":"29bf8d8922a98e4d6b7a605bbc93643211d01d122be4e054bbb4fcd4fd00f9ba"} Feb 18 19:51:24 crc kubenswrapper[4754]: I0218 19:51:24.506421 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-24qzm" Feb 18 19:51:24 crc kubenswrapper[4754]: I0218 19:51:24.579682 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58a9f756-d53b-4e64-8fbe-fa5cf22c9d76-ssh-key-openstack-edpm-ipam\") pod \"58a9f756-d53b-4e64-8fbe-fa5cf22c9d76\" (UID: \"58a9f756-d53b-4e64-8fbe-fa5cf22c9d76\") " Feb 18 19:51:24 crc kubenswrapper[4754]: I0218 19:51:24.579793 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4s4t\" (UniqueName: \"kubernetes.io/projected/58a9f756-d53b-4e64-8fbe-fa5cf22c9d76-kube-api-access-b4s4t\") pod \"58a9f756-d53b-4e64-8fbe-fa5cf22c9d76\" (UID: \"58a9f756-d53b-4e64-8fbe-fa5cf22c9d76\") " Feb 18 19:51:24 crc kubenswrapper[4754]: I0218 19:51:24.579830 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/58a9f756-d53b-4e64-8fbe-fa5cf22c9d76-inventory-0\") pod \"58a9f756-d53b-4e64-8fbe-fa5cf22c9d76\" (UID: \"58a9f756-d53b-4e64-8fbe-fa5cf22c9d76\") " Feb 18 19:51:24 crc kubenswrapper[4754]: I0218 19:51:24.587126 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58a9f756-d53b-4e64-8fbe-fa5cf22c9d76-kube-api-access-b4s4t" (OuterVolumeSpecName: "kube-api-access-b4s4t") pod "58a9f756-d53b-4e64-8fbe-fa5cf22c9d76" (UID: "58a9f756-d53b-4e64-8fbe-fa5cf22c9d76"). InnerVolumeSpecName "kube-api-access-b4s4t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:51:24 crc kubenswrapper[4754]: I0218 19:51:24.606781 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58a9f756-d53b-4e64-8fbe-fa5cf22c9d76-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "58a9f756-d53b-4e64-8fbe-fa5cf22c9d76" (UID: "58a9f756-d53b-4e64-8fbe-fa5cf22c9d76"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:51:24 crc kubenswrapper[4754]: I0218 19:51:24.608639 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58a9f756-d53b-4e64-8fbe-fa5cf22c9d76-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "58a9f756-d53b-4e64-8fbe-fa5cf22c9d76" (UID: "58a9f756-d53b-4e64-8fbe-fa5cf22c9d76"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:51:24 crc kubenswrapper[4754]: I0218 19:51:24.682498 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4s4t\" (UniqueName: \"kubernetes.io/projected/58a9f756-d53b-4e64-8fbe-fa5cf22c9d76-kube-api-access-b4s4t\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:24 crc kubenswrapper[4754]: I0218 19:51:24.682531 4754 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/58a9f756-d53b-4e64-8fbe-fa5cf22c9d76-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:24 crc kubenswrapper[4754]: I0218 19:51:24.682543 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/58a9f756-d53b-4e64-8fbe-fa5cf22c9d76-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.101995 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-24qzm" 
event={"ID":"58a9f756-d53b-4e64-8fbe-fa5cf22c9d76","Type":"ContainerDied","Data":"5de55f66583b0e514539f464edc85d6e957772f708492d8d412122633baf0968"} Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.102032 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5de55f66583b0e514539f464edc85d6e957772f708492d8d412122633baf0968" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.102065 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-24qzm" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.178310 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh"] Feb 18 19:51:25 crc kubenswrapper[4754]: E0218 19:51:25.178965 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58a9f756-d53b-4e64-8fbe-fa5cf22c9d76" containerName="ssh-known-hosts-edpm-deployment" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.179055 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="58a9f756-d53b-4e64-8fbe-fa5cf22c9d76" containerName="ssh-known-hosts-edpm-deployment" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.179337 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="58a9f756-d53b-4e64-8fbe-fa5cf22c9d76" containerName="ssh-known-hosts-edpm-deployment" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.180039 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.182094 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.182192 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bt6gd" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.182354 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.182441 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.195152 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh"] Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.292920 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/387b2a7f-5d6e-48be-851c-709c312cb682-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hmcsh\" (UID: \"387b2a7f-5d6e-48be-851c-709c312cb682\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.293218 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/387b2a7f-5d6e-48be-851c-709c312cb682-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hmcsh\" (UID: \"387b2a7f-5d6e-48be-851c-709c312cb682\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.293295 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mmwb\" (UniqueName: \"kubernetes.io/projected/387b2a7f-5d6e-48be-851c-709c312cb682-kube-api-access-9mmwb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hmcsh\" (UID: \"387b2a7f-5d6e-48be-851c-709c312cb682\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.395364 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mmwb\" (UniqueName: \"kubernetes.io/projected/387b2a7f-5d6e-48be-851c-709c312cb682-kube-api-access-9mmwb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hmcsh\" (UID: \"387b2a7f-5d6e-48be-851c-709c312cb682\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.395549 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/387b2a7f-5d6e-48be-851c-709c312cb682-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hmcsh\" (UID: \"387b2a7f-5d6e-48be-851c-709c312cb682\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.395573 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/387b2a7f-5d6e-48be-851c-709c312cb682-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hmcsh\" (UID: \"387b2a7f-5d6e-48be-851c-709c312cb682\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.399366 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/387b2a7f-5d6e-48be-851c-709c312cb682-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hmcsh\" (UID: 
\"387b2a7f-5d6e-48be-851c-709c312cb682\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.399553 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/387b2a7f-5d6e-48be-851c-709c312cb682-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hmcsh\" (UID: \"387b2a7f-5d6e-48be-851c-709c312cb682\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.416938 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mmwb\" (UniqueName: \"kubernetes.io/projected/387b2a7f-5d6e-48be-851c-709c312cb682-kube-api-access-9mmwb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-hmcsh\" (UID: \"387b2a7f-5d6e-48be-851c-709c312cb682\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh" Feb 18 19:51:25 crc kubenswrapper[4754]: I0218 19:51:25.498198 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh" Feb 18 19:51:26 crc kubenswrapper[4754]: I0218 19:51:26.057550 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh"] Feb 18 19:51:26 crc kubenswrapper[4754]: I0218 19:51:26.110025 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh" event={"ID":"387b2a7f-5d6e-48be-851c-709c312cb682","Type":"ContainerStarted","Data":"88734d243be5b2758cedb8336a02f81a80d0ce14a34bda1e942e849d9c724292"} Feb 18 19:51:27 crc kubenswrapper[4754]: I0218 19:51:27.121857 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh" event={"ID":"387b2a7f-5d6e-48be-851c-709c312cb682","Type":"ContainerStarted","Data":"93c9fc4bbf374c8963883fbe3fbbe00a133dbeb45f8b628e947657535555df18"} Feb 18 19:51:27 crc kubenswrapper[4754]: I0218 19:51:27.143482 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh" podStartSLOduration=1.692442827 podStartE2EDuration="2.143301011s" podCreationTimestamp="2026-02-18 19:51:25 +0000 UTC" firstStartedPulling="2026-02-18 19:51:26.066982735 +0000 UTC m=+1988.517395571" lastFinishedPulling="2026-02-18 19:51:26.517840959 +0000 UTC m=+1988.968253755" observedRunningTime="2026-02-18 19:51:27.134920426 +0000 UTC m=+1989.585333222" watchObservedRunningTime="2026-02-18 19:51:27.143301011 +0000 UTC m=+1989.593713807" Feb 18 19:51:35 crc kubenswrapper[4754]: I0218 19:51:35.197934 4754 generic.go:334] "Generic (PLEG): container finished" podID="387b2a7f-5d6e-48be-851c-709c312cb682" containerID="93c9fc4bbf374c8963883fbe3fbbe00a133dbeb45f8b628e947657535555df18" exitCode=0 Feb 18 19:51:35 crc kubenswrapper[4754]: I0218 19:51:35.198383 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh" event={"ID":"387b2a7f-5d6e-48be-851c-709c312cb682","Type":"ContainerDied","Data":"93c9fc4bbf374c8963883fbe3fbbe00a133dbeb45f8b628e947657535555df18"} Feb 18 19:51:36 crc kubenswrapper[4754]: I0218 19:51:36.600900 4754 scope.go:117] "RemoveContainer" containerID="0be47ccea0a8348510d65e01713426445bfb6ace646e55211e7f98fb8e858138" Feb 18 19:51:36 crc kubenswrapper[4754]: I0218 19:51:36.605984 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh" Feb 18 19:51:36 crc kubenswrapper[4754]: I0218 19:51:36.643050 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mmwb\" (UniqueName: \"kubernetes.io/projected/387b2a7f-5d6e-48be-851c-709c312cb682-kube-api-access-9mmwb\") pod \"387b2a7f-5d6e-48be-851c-709c312cb682\" (UID: \"387b2a7f-5d6e-48be-851c-709c312cb682\") " Feb 18 19:51:36 crc kubenswrapper[4754]: I0218 19:51:36.643268 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/387b2a7f-5d6e-48be-851c-709c312cb682-ssh-key-openstack-edpm-ipam\") pod \"387b2a7f-5d6e-48be-851c-709c312cb682\" (UID: \"387b2a7f-5d6e-48be-851c-709c312cb682\") " Feb 18 19:51:36 crc kubenswrapper[4754]: I0218 19:51:36.643378 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/387b2a7f-5d6e-48be-851c-709c312cb682-inventory\") pod \"387b2a7f-5d6e-48be-851c-709c312cb682\" (UID: \"387b2a7f-5d6e-48be-851c-709c312cb682\") " Feb 18 19:51:36 crc kubenswrapper[4754]: I0218 19:51:36.649526 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/387b2a7f-5d6e-48be-851c-709c312cb682-kube-api-access-9mmwb" (OuterVolumeSpecName: "kube-api-access-9mmwb") pod 
"387b2a7f-5d6e-48be-851c-709c312cb682" (UID: "387b2a7f-5d6e-48be-851c-709c312cb682"). InnerVolumeSpecName "kube-api-access-9mmwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:51:36 crc kubenswrapper[4754]: I0218 19:51:36.658119 4754 scope.go:117] "RemoveContainer" containerID="1bae5f121c7719d510033756925e87bd627c20c0a7ff8198303025c8b0b9a832" Feb 18 19:51:36 crc kubenswrapper[4754]: I0218 19:51:36.675909 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/387b2a7f-5d6e-48be-851c-709c312cb682-inventory" (OuterVolumeSpecName: "inventory") pod "387b2a7f-5d6e-48be-851c-709c312cb682" (UID: "387b2a7f-5d6e-48be-851c-709c312cb682"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:51:36 crc kubenswrapper[4754]: I0218 19:51:36.686157 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/387b2a7f-5d6e-48be-851c-709c312cb682-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "387b2a7f-5d6e-48be-851c-709c312cb682" (UID: "387b2a7f-5d6e-48be-851c-709c312cb682"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:51:36 crc kubenswrapper[4754]: I0218 19:51:36.745802 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mmwb\" (UniqueName: \"kubernetes.io/projected/387b2a7f-5d6e-48be-851c-709c312cb682-kube-api-access-9mmwb\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:36 crc kubenswrapper[4754]: I0218 19:51:36.745830 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/387b2a7f-5d6e-48be-851c-709c312cb682-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:36 crc kubenswrapper[4754]: I0218 19:51:36.745842 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/387b2a7f-5d6e-48be-851c-709c312cb682-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:36 crc kubenswrapper[4754]: I0218 19:51:36.757580 4754 scope.go:117] "RemoveContainer" containerID="c186b5c3527227838597ba94acfc66a3d49a4c83f60237e792c7cc69688329f0" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.220279 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh" event={"ID":"387b2a7f-5d6e-48be-851c-709c312cb682","Type":"ContainerDied","Data":"88734d243be5b2758cedb8336a02f81a80d0ce14a34bda1e942e849d9c724292"} Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.220318 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88734d243be5b2758cedb8336a02f81a80d0ce14a34bda1e942e849d9c724292" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.220356 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-hmcsh" Feb 18 19:51:37 crc kubenswrapper[4754]: E0218 19:51:37.289219 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod387b2a7f_5d6e_48be_851c_709c312cb682.slice/crio-88734d243be5b2758cedb8336a02f81a80d0ce14a34bda1e942e849d9c724292\": RecentStats: unable to find data in memory cache]" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.315441 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc"] Feb 18 19:51:37 crc kubenswrapper[4754]: E0218 19:51:37.316225 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="387b2a7f-5d6e-48be-851c-709c312cb682" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.316249 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="387b2a7f-5d6e-48be-851c-709c312cb682" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.316518 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="387b2a7f-5d6e-48be-851c-709c312cb682" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.317319 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.319435 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bt6gd" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.321571 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.321748 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.321870 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.335360 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc"] Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.361681 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c32fe67d-771c-4788-a3a7-166402a4f6f4-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc\" (UID: \"c32fe67d-771c-4788-a3a7-166402a4f6f4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.361924 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7g6z\" (UniqueName: \"kubernetes.io/projected/c32fe67d-771c-4788-a3a7-166402a4f6f4-kube-api-access-w7g6z\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc\" (UID: \"c32fe67d-771c-4788-a3a7-166402a4f6f4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 
19:51:37.362282 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c32fe67d-771c-4788-a3a7-166402a4f6f4-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc\" (UID: \"c32fe67d-771c-4788-a3a7-166402a4f6f4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.464067 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c32fe67d-771c-4788-a3a7-166402a4f6f4-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc\" (UID: \"c32fe67d-771c-4788-a3a7-166402a4f6f4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.464225 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c32fe67d-771c-4788-a3a7-166402a4f6f4-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc\" (UID: \"c32fe67d-771c-4788-a3a7-166402a4f6f4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.464375 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7g6z\" (UniqueName: \"kubernetes.io/projected/c32fe67d-771c-4788-a3a7-166402a4f6f4-kube-api-access-w7g6z\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc\" (UID: \"c32fe67d-771c-4788-a3a7-166402a4f6f4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.469248 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c32fe67d-771c-4788-a3a7-166402a4f6f4-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc\" (UID: \"c32fe67d-771c-4788-a3a7-166402a4f6f4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.469372 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c32fe67d-771c-4788-a3a7-166402a4f6f4-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc\" (UID: \"c32fe67d-771c-4788-a3a7-166402a4f6f4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.483615 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7g6z\" (UniqueName: \"kubernetes.io/projected/c32fe67d-771c-4788-a3a7-166402a4f6f4-kube-api-access-w7g6z\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc\" (UID: \"c32fe67d-771c-4788-a3a7-166402a4f6f4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc" Feb 18 19:51:37 crc kubenswrapper[4754]: I0218 19:51:37.638123 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc" Feb 18 19:51:38 crc kubenswrapper[4754]: I0218 19:51:38.096815 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:51:38 crc kubenswrapper[4754]: I0218 19:51:38.097052 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:51:38 crc kubenswrapper[4754]: I0218 19:51:38.097092 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:51:38 crc kubenswrapper[4754]: I0218 19:51:38.097741 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d9f2023f02567cdf6089106e2f4a1b2d50f661e61a8c391b007983e0df2635db"} pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:51:38 crc kubenswrapper[4754]: I0218 19:51:38.097807 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" containerID="cri-o://d9f2023f02567cdf6089106e2f4a1b2d50f661e61a8c391b007983e0df2635db" gracePeriod=600 Feb 18 19:51:38 crc kubenswrapper[4754]: W0218 19:51:38.206199 4754 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc32fe67d_771c_4788_a3a7_166402a4f6f4.slice/crio-81a335a0edb87ad312c617c08748aa9eb7cb1283cbd349cbd0326059245eb02e WatchSource:0}: Error finding container 81a335a0edb87ad312c617c08748aa9eb7cb1283cbd349cbd0326059245eb02e: Status 404 returned error can't find the container with id 81a335a0edb87ad312c617c08748aa9eb7cb1283cbd349cbd0326059245eb02e Feb 18 19:51:38 crc kubenswrapper[4754]: I0218 19:51:38.223024 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc"] Feb 18 19:51:38 crc kubenswrapper[4754]: I0218 19:51:38.229629 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc" event={"ID":"c32fe67d-771c-4788-a3a7-166402a4f6f4","Type":"ContainerStarted","Data":"81a335a0edb87ad312c617c08748aa9eb7cb1283cbd349cbd0326059245eb02e"} Feb 18 19:51:38 crc kubenswrapper[4754]: I0218 19:51:38.233795 4754 generic.go:334] "Generic (PLEG): container finished" podID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerID="d9f2023f02567cdf6089106e2f4a1b2d50f661e61a8c391b007983e0df2635db" exitCode=0 Feb 18 19:51:38 crc kubenswrapper[4754]: I0218 19:51:38.233837 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerDied","Data":"d9f2023f02567cdf6089106e2f4a1b2d50f661e61a8c391b007983e0df2635db"} Feb 18 19:51:38 crc kubenswrapper[4754]: I0218 19:51:38.233881 4754 scope.go:117] "RemoveContainer" containerID="9be7cf9359f8fafc840776d7e80d99e2a805428d398fd7fb5c3dab4480780333" Feb 18 19:51:39 crc kubenswrapper[4754]: I0218 19:51:39.242603 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc" 
event={"ID":"c32fe67d-771c-4788-a3a7-166402a4f6f4","Type":"ContainerStarted","Data":"f59ba39068b7a8821d4cc9178ce39aa8c2e206fc1ec5f382515e072670f5a1b2"} Feb 18 19:51:39 crc kubenswrapper[4754]: I0218 19:51:39.246201 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2"} Feb 18 19:51:39 crc kubenswrapper[4754]: I0218 19:51:39.272406 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc" podStartSLOduration=1.64475267 podStartE2EDuration="2.27238278s" podCreationTimestamp="2026-02-18 19:51:37 +0000 UTC" firstStartedPulling="2026-02-18 19:51:38.208751164 +0000 UTC m=+2000.659163950" lastFinishedPulling="2026-02-18 19:51:38.836381264 +0000 UTC m=+2001.286794060" observedRunningTime="2026-02-18 19:51:39.263965754 +0000 UTC m=+2001.714378550" watchObservedRunningTime="2026-02-18 19:51:39.27238278 +0000 UTC m=+2001.722795596" Feb 18 19:51:48 crc kubenswrapper[4754]: I0218 19:51:48.351872 4754 generic.go:334] "Generic (PLEG): container finished" podID="c32fe67d-771c-4788-a3a7-166402a4f6f4" containerID="f59ba39068b7a8821d4cc9178ce39aa8c2e206fc1ec5f382515e072670f5a1b2" exitCode=0 Feb 18 19:51:48 crc kubenswrapper[4754]: I0218 19:51:48.352030 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc" event={"ID":"c32fe67d-771c-4788-a3a7-166402a4f6f4","Type":"ContainerDied","Data":"f59ba39068b7a8821d4cc9178ce39aa8c2e206fc1ec5f382515e072670f5a1b2"} Feb 18 19:51:49 crc kubenswrapper[4754]: I0218 19:51:49.795284 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc" Feb 18 19:51:49 crc kubenswrapper[4754]: I0218 19:51:49.896943 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7g6z\" (UniqueName: \"kubernetes.io/projected/c32fe67d-771c-4788-a3a7-166402a4f6f4-kube-api-access-w7g6z\") pod \"c32fe67d-771c-4788-a3a7-166402a4f6f4\" (UID: \"c32fe67d-771c-4788-a3a7-166402a4f6f4\") " Feb 18 19:51:49 crc kubenswrapper[4754]: I0218 19:51:49.896997 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c32fe67d-771c-4788-a3a7-166402a4f6f4-inventory\") pod \"c32fe67d-771c-4788-a3a7-166402a4f6f4\" (UID: \"c32fe67d-771c-4788-a3a7-166402a4f6f4\") " Feb 18 19:51:49 crc kubenswrapper[4754]: I0218 19:51:49.897048 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c32fe67d-771c-4788-a3a7-166402a4f6f4-ssh-key-openstack-edpm-ipam\") pod \"c32fe67d-771c-4788-a3a7-166402a4f6f4\" (UID: \"c32fe67d-771c-4788-a3a7-166402a4f6f4\") " Feb 18 19:51:49 crc kubenswrapper[4754]: I0218 19:51:49.902590 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c32fe67d-771c-4788-a3a7-166402a4f6f4-kube-api-access-w7g6z" (OuterVolumeSpecName: "kube-api-access-w7g6z") pod "c32fe67d-771c-4788-a3a7-166402a4f6f4" (UID: "c32fe67d-771c-4788-a3a7-166402a4f6f4"). InnerVolumeSpecName "kube-api-access-w7g6z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:51:49 crc kubenswrapper[4754]: I0218 19:51:49.928894 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c32fe67d-771c-4788-a3a7-166402a4f6f4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c32fe67d-771c-4788-a3a7-166402a4f6f4" (UID: "c32fe67d-771c-4788-a3a7-166402a4f6f4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:51:49 crc kubenswrapper[4754]: I0218 19:51:49.928947 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c32fe67d-771c-4788-a3a7-166402a4f6f4-inventory" (OuterVolumeSpecName: "inventory") pod "c32fe67d-771c-4788-a3a7-166402a4f6f4" (UID: "c32fe67d-771c-4788-a3a7-166402a4f6f4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:51:49 crc kubenswrapper[4754]: I0218 19:51:49.999227 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c32fe67d-771c-4788-a3a7-166402a4f6f4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:49 crc kubenswrapper[4754]: I0218 19:51:49.999264 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7g6z\" (UniqueName: \"kubernetes.io/projected/c32fe67d-771c-4788-a3a7-166402a4f6f4-kube-api-access-w7g6z\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:49 crc kubenswrapper[4754]: I0218 19:51:49.999276 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c32fe67d-771c-4788-a3a7-166402a4f6f4-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.374387 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc" 
event={"ID":"c32fe67d-771c-4788-a3a7-166402a4f6f4","Type":"ContainerDied","Data":"81a335a0edb87ad312c617c08748aa9eb7cb1283cbd349cbd0326059245eb02e"} Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.374861 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81a335a0edb87ad312c617c08748aa9eb7cb1283cbd349cbd0326059245eb02e" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.374469 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bxthc" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.501754 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq"] Feb 18 19:51:50 crc kubenswrapper[4754]: E0218 19:51:50.502466 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c32fe67d-771c-4788-a3a7-166402a4f6f4" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.502527 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="c32fe67d-771c-4788-a3a7-166402a4f6f4" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.502839 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="c32fe67d-771c-4788-a3a7-166402a4f6f4" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.503829 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.508665 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.509034 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.509570 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.509775 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.509965 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc 
kubenswrapper[4754]: I0218 19:51:50.510047 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.510247 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.510356 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.510471 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.510541 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bt6gd" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.510582 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.510671 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.510566 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.510778 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.510797 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7gvm\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-kube-api-access-z7gvm\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 
19:51:50.510823 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.510858 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.510865 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.511029 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.511090 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-telemetry-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.511426 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.511774 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.517648 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq"] Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.613173 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.627322 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.627364 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.627448 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.627536 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.627558 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.627583 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.627617 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.627651 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.627746 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.627763 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7gvm\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-kube-api-access-z7gvm\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 
crc kubenswrapper[4754]: I0218 19:51:50.627788 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.627832 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.627908 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.618781 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.635974 4754 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.637038 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.638623 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.638669 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.642963 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.653261 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.667100 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.667183 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.667342 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.667669 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.671916 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.672031 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7gvm\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-kube-api-access-z7gvm\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.700370 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:50 crc kubenswrapper[4754]: I0218 19:51:50.837980 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:51:51 crc kubenswrapper[4754]: W0218 19:51:51.447000 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf671bbe8_8e7a_4aeb_aed9_ed72dc1df2fc.slice/crio-19664282ffb7ec35a4931d5da6ae74301e013a951425560d36bbe4b9ffe1ed85 WatchSource:0}: Error finding container 19664282ffb7ec35a4931d5da6ae74301e013a951425560d36bbe4b9ffe1ed85: Status 404 returned error can't find the container with id 19664282ffb7ec35a4931d5da6ae74301e013a951425560d36bbe4b9ffe1ed85 Feb 18 19:51:51 crc kubenswrapper[4754]: I0218 19:51:51.452309 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq"] Feb 18 19:51:52 crc kubenswrapper[4754]: I0218 19:51:52.051064 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-x4p8j"] Feb 18 19:51:52 crc kubenswrapper[4754]: I0218 19:51:52.058102 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-x4p8j"] Feb 18 19:51:52 crc kubenswrapper[4754]: I0218 19:51:52.222781 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="374cff81-352f-49da-b29e-db1c118cfd37" path="/var/lib/kubelet/pods/374cff81-352f-49da-b29e-db1c118cfd37/volumes" Feb 18 19:51:52 crc kubenswrapper[4754]: I0218 19:51:52.392985 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" event={"ID":"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc","Type":"ContainerStarted","Data":"19664282ffb7ec35a4931d5da6ae74301e013a951425560d36bbe4b9ffe1ed85"} Feb 18 19:51:53 crc kubenswrapper[4754]: I0218 19:51:53.404615 
4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" event={"ID":"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc","Type":"ContainerStarted","Data":"bec65619d5a640499bfe7a9620607b5bc69a41db13b210c723a99d62d5991aec"} Feb 18 19:51:53 crc kubenswrapper[4754]: I0218 19:51:53.434451 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" podStartSLOduration=2.6075136150000002 podStartE2EDuration="3.434431784s" podCreationTimestamp="2026-02-18 19:51:50 +0000 UTC" firstStartedPulling="2026-02-18 19:51:51.449876814 +0000 UTC m=+2013.900289610" lastFinishedPulling="2026-02-18 19:51:52.276794973 +0000 UTC m=+2014.727207779" observedRunningTime="2026-02-18 19:51:53.424596634 +0000 UTC m=+2015.875009460" watchObservedRunningTime="2026-02-18 19:51:53.434431784 +0000 UTC m=+2015.884844570" Feb 18 19:52:27 crc kubenswrapper[4754]: I0218 19:52:27.734904 4754 generic.go:334] "Generic (PLEG): container finished" podID="f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc" containerID="bec65619d5a640499bfe7a9620607b5bc69a41db13b210c723a99d62d5991aec" exitCode=0 Feb 18 19:52:27 crc kubenswrapper[4754]: I0218 19:52:27.735037 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" event={"ID":"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc","Type":"ContainerDied","Data":"bec65619d5a640499bfe7a9620607b5bc69a41db13b210c723a99d62d5991aec"} Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.223616 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.380020 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7gvm\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-kube-api-access-z7gvm\") pod \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.380312 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.380421 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-telemetry-combined-ca-bundle\") pod \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.380506 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-nova-combined-ca-bundle\") pod \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.380583 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-neutron-metadata-combined-ca-bundle\") pod \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\" (UID: 
\"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.380785 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.381198 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-libvirt-combined-ca-bundle\") pod \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.381313 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.381444 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-ssh-key-openstack-edpm-ipam\") pod \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.381536 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-ovn-combined-ca-bundle\") pod \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\" (UID: 
\"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.381673 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-bootstrap-combined-ca-bundle\") pod \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.381782 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-repo-setup-combined-ca-bundle\") pod \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.381889 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-inventory\") pod \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.382009 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\" (UID: \"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc\") " Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.389750 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc" (UID: "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.390615 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc" (UID: "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.390638 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-kube-api-access-z7gvm" (OuterVolumeSpecName: "kube-api-access-z7gvm") pod "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc" (UID: "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc"). InnerVolumeSpecName "kube-api-access-z7gvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.390835 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc" (UID: "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.391315 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc" (UID: "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc"). 
InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.391494 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc" (UID: "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.391547 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc" (UID: "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.393667 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc" (UID: "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.394805 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc" (UID: "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc"). 
InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.395373 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc" (UID: "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.398704 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc" (UID: "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.400952 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc" (UID: "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.428032 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc" (UID: "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.455799 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-inventory" (OuterVolumeSpecName: "inventory") pod "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc" (UID: "f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.484385 4754 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.484441 4754 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.484456 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.484473 4754 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.484525 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7gvm\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-kube-api-access-z7gvm\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:29 crc 
kubenswrapper[4754]: I0218 19:52:29.484538 4754 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.484549 4754 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.484562 4754 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.484575 4754 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.484589 4754 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.484600 4754 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.484631 4754 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.484644 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.484659 4754 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.767863 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" event={"ID":"f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc","Type":"ContainerDied","Data":"19664282ffb7ec35a4931d5da6ae74301e013a951425560d36bbe4b9ffe1ed85"} Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.767922 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19664282ffb7ec35a4931d5da6ae74301e013a951425560d36bbe4b9ffe1ed85" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.768001 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-p7xgq" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.868660 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm"] Feb 18 19:52:29 crc kubenswrapper[4754]: E0218 19:52:29.869115 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.869136 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.869397 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="f671bbe8-8e7a-4aeb-aed9-ed72dc1df2fc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.870099 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.874742 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.874832 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.874743 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.875003 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bt6gd" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.877616 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.886052 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm"] Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.993737 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f42b7b9-612c-4c21-9fa8-6d211cb67695-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x57zm\" (UID: \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.993928 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0f42b7b9-612c-4c21-9fa8-6d211cb67695-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x57zm\" (UID: 
\"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.994020 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvm29\" (UniqueName: \"kubernetes.io/projected/0f42b7b9-612c-4c21-9fa8-6d211cb67695-kube-api-access-nvm29\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x57zm\" (UID: \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.994077 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f42b7b9-612c-4c21-9fa8-6d211cb67695-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x57zm\" (UID: \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" Feb 18 19:52:29 crc kubenswrapper[4754]: I0218 19:52:29.994126 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f42b7b9-612c-4c21-9fa8-6d211cb67695-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x57zm\" (UID: \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" Feb 18 19:52:30 crc kubenswrapper[4754]: I0218 19:52:30.095950 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0f42b7b9-612c-4c21-9fa8-6d211cb67695-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x57zm\" (UID: \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" Feb 18 19:52:30 crc kubenswrapper[4754]: I0218 19:52:30.096436 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nvm29\" (UniqueName: \"kubernetes.io/projected/0f42b7b9-612c-4c21-9fa8-6d211cb67695-kube-api-access-nvm29\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x57zm\" (UID: \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" Feb 18 19:52:30 crc kubenswrapper[4754]: I0218 19:52:30.096587 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f42b7b9-612c-4c21-9fa8-6d211cb67695-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x57zm\" (UID: \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" Feb 18 19:52:30 crc kubenswrapper[4754]: I0218 19:52:30.096742 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f42b7b9-612c-4c21-9fa8-6d211cb67695-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x57zm\" (UID: \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" Feb 18 19:52:30 crc kubenswrapper[4754]: I0218 19:52:30.097001 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f42b7b9-612c-4c21-9fa8-6d211cb67695-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x57zm\" (UID: \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" Feb 18 19:52:30 crc kubenswrapper[4754]: I0218 19:52:30.102986 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0f42b7b9-612c-4c21-9fa8-6d211cb67695-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x57zm\" (UID: 
\"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" Feb 18 19:52:30 crc kubenswrapper[4754]: I0218 19:52:30.103314 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f42b7b9-612c-4c21-9fa8-6d211cb67695-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x57zm\" (UID: \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" Feb 18 19:52:30 crc kubenswrapper[4754]: I0218 19:52:30.103521 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f42b7b9-612c-4c21-9fa8-6d211cb67695-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x57zm\" (UID: \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" Feb 18 19:52:30 crc kubenswrapper[4754]: I0218 19:52:30.126865 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f42b7b9-612c-4c21-9fa8-6d211cb67695-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x57zm\" (UID: \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" Feb 18 19:52:30 crc kubenswrapper[4754]: I0218 19:52:30.136408 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvm29\" (UniqueName: \"kubernetes.io/projected/0f42b7b9-612c-4c21-9fa8-6d211cb67695-kube-api-access-nvm29\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-x57zm\" (UID: \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" Feb 18 19:52:30 crc kubenswrapper[4754]: I0218 19:52:30.197776 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" Feb 18 19:52:30 crc kubenswrapper[4754]: I0218 19:52:30.732783 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm"] Feb 18 19:52:30 crc kubenswrapper[4754]: I0218 19:52:30.747538 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:52:30 crc kubenswrapper[4754]: I0218 19:52:30.780123 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" event={"ID":"0f42b7b9-612c-4c21-9fa8-6d211cb67695","Type":"ContainerStarted","Data":"23366d52ed4274ffc82b32789bc81583014321c68fb0c3ec7676c2bb911e1dce"} Feb 18 19:52:31 crc kubenswrapper[4754]: I0218 19:52:31.798283 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" event={"ID":"0f42b7b9-612c-4c21-9fa8-6d211cb67695","Type":"ContainerStarted","Data":"c3cbe541a20f2deb1df699844cfc00ea16404d2e16519ce880862a1ae70259c5"} Feb 18 19:52:31 crc kubenswrapper[4754]: I0218 19:52:31.824875 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" podStartSLOduration=2.414993673 podStartE2EDuration="2.824856094s" podCreationTimestamp="2026-02-18 19:52:29 +0000 UTC" firstStartedPulling="2026-02-18 19:52:30.747073091 +0000 UTC m=+2053.197485897" lastFinishedPulling="2026-02-18 19:52:31.156935532 +0000 UTC m=+2053.607348318" observedRunningTime="2026-02-18 19:52:31.817601906 +0000 UTC m=+2054.268014722" watchObservedRunningTime="2026-02-18 19:52:31.824856094 +0000 UTC m=+2054.275268900" Feb 18 19:52:36 crc kubenswrapper[4754]: I0218 19:52:36.865053 4754 scope.go:117] "RemoveContainer" containerID="d48f6023a01e527e47d74bf99a9f03ad6d217b681d56a24df745f363f38fe2e4" Feb 18 19:52:57 crc kubenswrapper[4754]: I0218 19:52:57.448483 4754 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nkdjl"] Feb 18 19:52:57 crc kubenswrapper[4754]: I0218 19:52:57.452202 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkdjl" Feb 18 19:52:57 crc kubenswrapper[4754]: I0218 19:52:57.483092 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nkdjl"] Feb 18 19:52:57 crc kubenswrapper[4754]: I0218 19:52:57.568065 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7724885c-b323-45ff-ab55-ec7ac63f7e9c-catalog-content\") pod \"redhat-operators-nkdjl\" (UID: \"7724885c-b323-45ff-ab55-ec7ac63f7e9c\") " pod="openshift-marketplace/redhat-operators-nkdjl" Feb 18 19:52:57 crc kubenswrapper[4754]: I0218 19:52:57.568287 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk74p\" (UniqueName: \"kubernetes.io/projected/7724885c-b323-45ff-ab55-ec7ac63f7e9c-kube-api-access-vk74p\") pod \"redhat-operators-nkdjl\" (UID: \"7724885c-b323-45ff-ab55-ec7ac63f7e9c\") " pod="openshift-marketplace/redhat-operators-nkdjl" Feb 18 19:52:57 crc kubenswrapper[4754]: I0218 19:52:57.568525 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7724885c-b323-45ff-ab55-ec7ac63f7e9c-utilities\") pod \"redhat-operators-nkdjl\" (UID: \"7724885c-b323-45ff-ab55-ec7ac63f7e9c\") " pod="openshift-marketplace/redhat-operators-nkdjl" Feb 18 19:52:57 crc kubenswrapper[4754]: I0218 19:52:57.671242 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7724885c-b323-45ff-ab55-ec7ac63f7e9c-utilities\") pod \"redhat-operators-nkdjl\" (UID: 
\"7724885c-b323-45ff-ab55-ec7ac63f7e9c\") " pod="openshift-marketplace/redhat-operators-nkdjl" Feb 18 19:52:57 crc kubenswrapper[4754]: I0218 19:52:57.671406 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7724885c-b323-45ff-ab55-ec7ac63f7e9c-catalog-content\") pod \"redhat-operators-nkdjl\" (UID: \"7724885c-b323-45ff-ab55-ec7ac63f7e9c\") " pod="openshift-marketplace/redhat-operators-nkdjl" Feb 18 19:52:57 crc kubenswrapper[4754]: I0218 19:52:57.671444 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk74p\" (UniqueName: \"kubernetes.io/projected/7724885c-b323-45ff-ab55-ec7ac63f7e9c-kube-api-access-vk74p\") pod \"redhat-operators-nkdjl\" (UID: \"7724885c-b323-45ff-ab55-ec7ac63f7e9c\") " pod="openshift-marketplace/redhat-operators-nkdjl" Feb 18 19:52:57 crc kubenswrapper[4754]: I0218 19:52:57.671717 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7724885c-b323-45ff-ab55-ec7ac63f7e9c-utilities\") pod \"redhat-operators-nkdjl\" (UID: \"7724885c-b323-45ff-ab55-ec7ac63f7e9c\") " pod="openshift-marketplace/redhat-operators-nkdjl" Feb 18 19:52:57 crc kubenswrapper[4754]: I0218 19:52:57.671864 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7724885c-b323-45ff-ab55-ec7ac63f7e9c-catalog-content\") pod \"redhat-operators-nkdjl\" (UID: \"7724885c-b323-45ff-ab55-ec7ac63f7e9c\") " pod="openshift-marketplace/redhat-operators-nkdjl" Feb 18 19:52:57 crc kubenswrapper[4754]: I0218 19:52:57.702428 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk74p\" (UniqueName: \"kubernetes.io/projected/7724885c-b323-45ff-ab55-ec7ac63f7e9c-kube-api-access-vk74p\") pod \"redhat-operators-nkdjl\" (UID: \"7724885c-b323-45ff-ab55-ec7ac63f7e9c\") " 
pod="openshift-marketplace/redhat-operators-nkdjl" Feb 18 19:52:57 crc kubenswrapper[4754]: I0218 19:52:57.791074 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkdjl" Feb 18 19:52:58 crc kubenswrapper[4754]: I0218 19:52:58.299903 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nkdjl"] Feb 18 19:52:59 crc kubenswrapper[4754]: I0218 19:52:59.071930 4754 generic.go:334] "Generic (PLEG): container finished" podID="7724885c-b323-45ff-ab55-ec7ac63f7e9c" containerID="09ddbe20bada718a4f697a6ef0ad3f414f3ff4063aa5b9265e9e094aae4a689c" exitCode=0 Feb 18 19:52:59 crc kubenswrapper[4754]: I0218 19:52:59.071977 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkdjl" event={"ID":"7724885c-b323-45ff-ab55-ec7ac63f7e9c","Type":"ContainerDied","Data":"09ddbe20bada718a4f697a6ef0ad3f414f3ff4063aa5b9265e9e094aae4a689c"} Feb 18 19:52:59 crc kubenswrapper[4754]: I0218 19:52:59.072188 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkdjl" event={"ID":"7724885c-b323-45ff-ab55-ec7ac63f7e9c","Type":"ContainerStarted","Data":"27395e3e9af5d9ff860028498201bf8787f59f93b5250343594d1d4503c56068"} Feb 18 19:53:02 crc kubenswrapper[4754]: I0218 19:53:02.105997 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkdjl" event={"ID":"7724885c-b323-45ff-ab55-ec7ac63f7e9c","Type":"ContainerStarted","Data":"6bce1e37e2cf7f883e79488da075e6b6e3ddc554db00943252eed7978cb4a095"} Feb 18 19:53:03 crc kubenswrapper[4754]: I0218 19:53:03.116364 4754 generic.go:334] "Generic (PLEG): container finished" podID="7724885c-b323-45ff-ab55-ec7ac63f7e9c" containerID="6bce1e37e2cf7f883e79488da075e6b6e3ddc554db00943252eed7978cb4a095" exitCode=0 Feb 18 19:53:03 crc kubenswrapper[4754]: I0218 19:53:03.116548 4754 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-nkdjl" event={"ID":"7724885c-b323-45ff-ab55-ec7ac63f7e9c","Type":"ContainerDied","Data":"6bce1e37e2cf7f883e79488da075e6b6e3ddc554db00943252eed7978cb4a095"} Feb 18 19:53:06 crc kubenswrapper[4754]: I0218 19:53:06.163432 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkdjl" event={"ID":"7724885c-b323-45ff-ab55-ec7ac63f7e9c","Type":"ContainerStarted","Data":"9d9d73fc1947408726daf3b69f19ae4862e4ba4455f06b52f0e01ec73035fa91"} Feb 18 19:53:06 crc kubenswrapper[4754]: I0218 19:53:06.187930 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nkdjl" podStartSLOduration=2.9706016650000002 podStartE2EDuration="9.187908295s" podCreationTimestamp="2026-02-18 19:52:57 +0000 UTC" firstStartedPulling="2026-02-18 19:52:59.073467683 +0000 UTC m=+2081.523880479" lastFinishedPulling="2026-02-18 19:53:05.290774273 +0000 UTC m=+2087.741187109" observedRunningTime="2026-02-18 19:53:06.182208557 +0000 UTC m=+2088.632621353" watchObservedRunningTime="2026-02-18 19:53:06.187908295 +0000 UTC m=+2088.638321111" Feb 18 19:53:07 crc kubenswrapper[4754]: I0218 19:53:07.791199 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nkdjl" Feb 18 19:53:07 crc kubenswrapper[4754]: I0218 19:53:07.791510 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nkdjl" Feb 18 19:53:08 crc kubenswrapper[4754]: I0218 19:53:08.835030 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nkdjl" podUID="7724885c-b323-45ff-ab55-ec7ac63f7e9c" containerName="registry-server" probeResult="failure" output=< Feb 18 19:53:08 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Feb 18 19:53:08 crc kubenswrapper[4754]: > Feb 18 19:53:17 crc kubenswrapper[4754]: 
I0218 19:53:17.869735 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nkdjl" Feb 18 19:53:17 crc kubenswrapper[4754]: I0218 19:53:17.924698 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nkdjl" Feb 18 19:53:18 crc kubenswrapper[4754]: I0218 19:53:18.112173 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nkdjl"] Feb 18 19:53:19 crc kubenswrapper[4754]: I0218 19:53:19.289210 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nkdjl" podUID="7724885c-b323-45ff-ab55-ec7ac63f7e9c" containerName="registry-server" containerID="cri-o://9d9d73fc1947408726daf3b69f19ae4862e4ba4455f06b52f0e01ec73035fa91" gracePeriod=2 Feb 18 19:53:19 crc kubenswrapper[4754]: I0218 19:53:19.756038 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkdjl" Feb 18 19:53:19 crc kubenswrapper[4754]: I0218 19:53:19.932798 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk74p\" (UniqueName: \"kubernetes.io/projected/7724885c-b323-45ff-ab55-ec7ac63f7e9c-kube-api-access-vk74p\") pod \"7724885c-b323-45ff-ab55-ec7ac63f7e9c\" (UID: \"7724885c-b323-45ff-ab55-ec7ac63f7e9c\") " Feb 18 19:53:19 crc kubenswrapper[4754]: I0218 19:53:19.932891 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7724885c-b323-45ff-ab55-ec7ac63f7e9c-catalog-content\") pod \"7724885c-b323-45ff-ab55-ec7ac63f7e9c\" (UID: \"7724885c-b323-45ff-ab55-ec7ac63f7e9c\") " Feb 18 19:53:19 crc kubenswrapper[4754]: I0218 19:53:19.933051 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7724885c-b323-45ff-ab55-ec7ac63f7e9c-utilities\") pod \"7724885c-b323-45ff-ab55-ec7ac63f7e9c\" (UID: \"7724885c-b323-45ff-ab55-ec7ac63f7e9c\") " Feb 18 19:53:19 crc kubenswrapper[4754]: I0218 19:53:19.933698 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7724885c-b323-45ff-ab55-ec7ac63f7e9c-utilities" (OuterVolumeSpecName: "utilities") pod "7724885c-b323-45ff-ab55-ec7ac63f7e9c" (UID: "7724885c-b323-45ff-ab55-ec7ac63f7e9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:53:19 crc kubenswrapper[4754]: I0218 19:53:19.938629 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7724885c-b323-45ff-ab55-ec7ac63f7e9c-kube-api-access-vk74p" (OuterVolumeSpecName: "kube-api-access-vk74p") pod "7724885c-b323-45ff-ab55-ec7ac63f7e9c" (UID: "7724885c-b323-45ff-ab55-ec7ac63f7e9c"). InnerVolumeSpecName "kube-api-access-vk74p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:20 crc kubenswrapper[4754]: I0218 19:53:20.036261 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk74p\" (UniqueName: \"kubernetes.io/projected/7724885c-b323-45ff-ab55-ec7ac63f7e9c-kube-api-access-vk74p\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:20 crc kubenswrapper[4754]: I0218 19:53:20.036580 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7724885c-b323-45ff-ab55-ec7ac63f7e9c-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:20 crc kubenswrapper[4754]: I0218 19:53:20.073631 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7724885c-b323-45ff-ab55-ec7ac63f7e9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7724885c-b323-45ff-ab55-ec7ac63f7e9c" (UID: "7724885c-b323-45ff-ab55-ec7ac63f7e9c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:53:20 crc kubenswrapper[4754]: I0218 19:53:20.139365 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7724885c-b323-45ff-ab55-ec7ac63f7e9c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:20 crc kubenswrapper[4754]: I0218 19:53:20.300794 4754 generic.go:334] "Generic (PLEG): container finished" podID="7724885c-b323-45ff-ab55-ec7ac63f7e9c" containerID="9d9d73fc1947408726daf3b69f19ae4862e4ba4455f06b52f0e01ec73035fa91" exitCode=0 Feb 18 19:53:20 crc kubenswrapper[4754]: I0218 19:53:20.300863 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkdjl" Feb 18 19:53:20 crc kubenswrapper[4754]: I0218 19:53:20.300904 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkdjl" event={"ID":"7724885c-b323-45ff-ab55-ec7ac63f7e9c","Type":"ContainerDied","Data":"9d9d73fc1947408726daf3b69f19ae4862e4ba4455f06b52f0e01ec73035fa91"} Feb 18 19:53:20 crc kubenswrapper[4754]: I0218 19:53:20.302056 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkdjl" event={"ID":"7724885c-b323-45ff-ab55-ec7ac63f7e9c","Type":"ContainerDied","Data":"27395e3e9af5d9ff860028498201bf8787f59f93b5250343594d1d4503c56068"} Feb 18 19:53:20 crc kubenswrapper[4754]: I0218 19:53:20.302099 4754 scope.go:117] "RemoveContainer" containerID="9d9d73fc1947408726daf3b69f19ae4862e4ba4455f06b52f0e01ec73035fa91" Feb 18 19:53:20 crc kubenswrapper[4754]: I0218 19:53:20.339843 4754 scope.go:117] "RemoveContainer" containerID="6bce1e37e2cf7f883e79488da075e6b6e3ddc554db00943252eed7978cb4a095" Feb 18 19:53:20 crc kubenswrapper[4754]: I0218 19:53:20.342261 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nkdjl"] Feb 18 19:53:20 crc kubenswrapper[4754]: I0218 
19:53:20.354504 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nkdjl"] Feb 18 19:53:20 crc kubenswrapper[4754]: I0218 19:53:20.361053 4754 scope.go:117] "RemoveContainer" containerID="09ddbe20bada718a4f697a6ef0ad3f414f3ff4063aa5b9265e9e094aae4a689c" Feb 18 19:53:20 crc kubenswrapper[4754]: I0218 19:53:20.402738 4754 scope.go:117] "RemoveContainer" containerID="9d9d73fc1947408726daf3b69f19ae4862e4ba4455f06b52f0e01ec73035fa91" Feb 18 19:53:20 crc kubenswrapper[4754]: E0218 19:53:20.403364 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d9d73fc1947408726daf3b69f19ae4862e4ba4455f06b52f0e01ec73035fa91\": container with ID starting with 9d9d73fc1947408726daf3b69f19ae4862e4ba4455f06b52f0e01ec73035fa91 not found: ID does not exist" containerID="9d9d73fc1947408726daf3b69f19ae4862e4ba4455f06b52f0e01ec73035fa91" Feb 18 19:53:20 crc kubenswrapper[4754]: I0218 19:53:20.403437 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d9d73fc1947408726daf3b69f19ae4862e4ba4455f06b52f0e01ec73035fa91"} err="failed to get container status \"9d9d73fc1947408726daf3b69f19ae4862e4ba4455f06b52f0e01ec73035fa91\": rpc error: code = NotFound desc = could not find container \"9d9d73fc1947408726daf3b69f19ae4862e4ba4455f06b52f0e01ec73035fa91\": container with ID starting with 9d9d73fc1947408726daf3b69f19ae4862e4ba4455f06b52f0e01ec73035fa91 not found: ID does not exist" Feb 18 19:53:20 crc kubenswrapper[4754]: I0218 19:53:20.403479 4754 scope.go:117] "RemoveContainer" containerID="6bce1e37e2cf7f883e79488da075e6b6e3ddc554db00943252eed7978cb4a095" Feb 18 19:53:20 crc kubenswrapper[4754]: E0218 19:53:20.403784 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bce1e37e2cf7f883e79488da075e6b6e3ddc554db00943252eed7978cb4a095\": container with ID 
starting with 6bce1e37e2cf7f883e79488da075e6b6e3ddc554db00943252eed7978cb4a095 not found: ID does not exist" containerID="6bce1e37e2cf7f883e79488da075e6b6e3ddc554db00943252eed7978cb4a095" Feb 18 19:53:20 crc kubenswrapper[4754]: I0218 19:53:20.403826 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bce1e37e2cf7f883e79488da075e6b6e3ddc554db00943252eed7978cb4a095"} err="failed to get container status \"6bce1e37e2cf7f883e79488da075e6b6e3ddc554db00943252eed7978cb4a095\": rpc error: code = NotFound desc = could not find container \"6bce1e37e2cf7f883e79488da075e6b6e3ddc554db00943252eed7978cb4a095\": container with ID starting with 6bce1e37e2cf7f883e79488da075e6b6e3ddc554db00943252eed7978cb4a095 not found: ID does not exist" Feb 18 19:53:20 crc kubenswrapper[4754]: I0218 19:53:20.403852 4754 scope.go:117] "RemoveContainer" containerID="09ddbe20bada718a4f697a6ef0ad3f414f3ff4063aa5b9265e9e094aae4a689c" Feb 18 19:53:20 crc kubenswrapper[4754]: E0218 19:53:20.404227 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09ddbe20bada718a4f697a6ef0ad3f414f3ff4063aa5b9265e9e094aae4a689c\": container with ID starting with 09ddbe20bada718a4f697a6ef0ad3f414f3ff4063aa5b9265e9e094aae4a689c not found: ID does not exist" containerID="09ddbe20bada718a4f697a6ef0ad3f414f3ff4063aa5b9265e9e094aae4a689c" Feb 18 19:53:20 crc kubenswrapper[4754]: I0218 19:53:20.404255 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09ddbe20bada718a4f697a6ef0ad3f414f3ff4063aa5b9265e9e094aae4a689c"} err="failed to get container status \"09ddbe20bada718a4f697a6ef0ad3f414f3ff4063aa5b9265e9e094aae4a689c\": rpc error: code = NotFound desc = could not find container \"09ddbe20bada718a4f697a6ef0ad3f414f3ff4063aa5b9265e9e094aae4a689c\": container with ID starting with 09ddbe20bada718a4f697a6ef0ad3f414f3ff4063aa5b9265e9e094aae4a689c not found: 
ID does not exist" Feb 18 19:53:22 crc kubenswrapper[4754]: I0218 19:53:22.221037 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7724885c-b323-45ff-ab55-ec7ac63f7e9c" path="/var/lib/kubelet/pods/7724885c-b323-45ff-ab55-ec7ac63f7e9c/volumes" Feb 18 19:53:31 crc kubenswrapper[4754]: I0218 19:53:31.428113 4754 generic.go:334] "Generic (PLEG): container finished" podID="0f42b7b9-612c-4c21-9fa8-6d211cb67695" containerID="c3cbe541a20f2deb1df699844cfc00ea16404d2e16519ce880862a1ae70259c5" exitCode=0 Feb 18 19:53:31 crc kubenswrapper[4754]: I0218 19:53:31.428202 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" event={"ID":"0f42b7b9-612c-4c21-9fa8-6d211cb67695","Type":"ContainerDied","Data":"c3cbe541a20f2deb1df699844cfc00ea16404d2e16519ce880862a1ae70259c5"} Feb 18 19:53:32 crc kubenswrapper[4754]: I0218 19:53:32.822796 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.003151 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f42b7b9-612c-4c21-9fa8-6d211cb67695-ssh-key-openstack-edpm-ipam\") pod \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\" (UID: \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.003231 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f42b7b9-612c-4c21-9fa8-6d211cb67695-inventory\") pod \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\" (UID: \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.003322 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvm29\" (UniqueName: 
\"kubernetes.io/projected/0f42b7b9-612c-4c21-9fa8-6d211cb67695-kube-api-access-nvm29\") pod \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\" (UID: \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.003424 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0f42b7b9-612c-4c21-9fa8-6d211cb67695-ovncontroller-config-0\") pod \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\" (UID: \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.003610 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f42b7b9-612c-4c21-9fa8-6d211cb67695-ovn-combined-ca-bundle\") pod \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\" (UID: \"0f42b7b9-612c-4c21-9fa8-6d211cb67695\") " Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.011264 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f42b7b9-612c-4c21-9fa8-6d211cb67695-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0f42b7b9-612c-4c21-9fa8-6d211cb67695" (UID: "0f42b7b9-612c-4c21-9fa8-6d211cb67695"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.011269 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f42b7b9-612c-4c21-9fa8-6d211cb67695-kube-api-access-nvm29" (OuterVolumeSpecName: "kube-api-access-nvm29") pod "0f42b7b9-612c-4c21-9fa8-6d211cb67695" (UID: "0f42b7b9-612c-4c21-9fa8-6d211cb67695"). InnerVolumeSpecName "kube-api-access-nvm29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.033797 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f42b7b9-612c-4c21-9fa8-6d211cb67695-inventory" (OuterVolumeSpecName: "inventory") pod "0f42b7b9-612c-4c21-9fa8-6d211cb67695" (UID: "0f42b7b9-612c-4c21-9fa8-6d211cb67695"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.034778 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f42b7b9-612c-4c21-9fa8-6d211cb67695-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0f42b7b9-612c-4c21-9fa8-6d211cb67695" (UID: "0f42b7b9-612c-4c21-9fa8-6d211cb67695"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.039172 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f42b7b9-612c-4c21-9fa8-6d211cb67695-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "0f42b7b9-612c-4c21-9fa8-6d211cb67695" (UID: "0f42b7b9-612c-4c21-9fa8-6d211cb67695"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.106619 4754 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f42b7b9-612c-4c21-9fa8-6d211cb67695-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.106666 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f42b7b9-612c-4c21-9fa8-6d211cb67695-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.106688 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f42b7b9-612c-4c21-9fa8-6d211cb67695-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.106701 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvm29\" (UniqueName: \"kubernetes.io/projected/0f42b7b9-612c-4c21-9fa8-6d211cb67695-kube-api-access-nvm29\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.106713 4754 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0f42b7b9-612c-4c21-9fa8-6d211cb67695-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.448945 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" event={"ID":"0f42b7b9-612c-4c21-9fa8-6d211cb67695","Type":"ContainerDied","Data":"23366d52ed4274ffc82b32789bc81583014321c68fb0c3ec7676c2bb911e1dce"} Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.449290 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23366d52ed4274ffc82b32789bc81583014321c68fb0c3ec7676c2bb911e1dce" Feb 
18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.449032 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-x57zm" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.624006 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5"] Feb 18 19:53:33 crc kubenswrapper[4754]: E0218 19:53:33.624381 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f42b7b9-612c-4c21-9fa8-6d211cb67695" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.624397 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f42b7b9-612c-4c21-9fa8-6d211cb67695" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 19:53:33 crc kubenswrapper[4754]: E0218 19:53:33.624418 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7724885c-b323-45ff-ab55-ec7ac63f7e9c" containerName="registry-server" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.624425 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="7724885c-b323-45ff-ab55-ec7ac63f7e9c" containerName="registry-server" Feb 18 19:53:33 crc kubenswrapper[4754]: E0218 19:53:33.624445 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7724885c-b323-45ff-ab55-ec7ac63f7e9c" containerName="extract-utilities" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.624451 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="7724885c-b323-45ff-ab55-ec7ac63f7e9c" containerName="extract-utilities" Feb 18 19:53:33 crc kubenswrapper[4754]: E0218 19:53:33.624466 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7724885c-b323-45ff-ab55-ec7ac63f7e9c" containerName="extract-content" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.624472 4754 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7724885c-b323-45ff-ab55-ec7ac63f7e9c" containerName="extract-content" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.624678 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="7724885c-b323-45ff-ab55-ec7ac63f7e9c" containerName="registry-server" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.624700 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f42b7b9-612c-4c21-9fa8-6d211cb67695" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.625307 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.629906 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.629948 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.629962 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.630210 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.630282 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bt6gd" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.636222 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.640172 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5"] Feb 
18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.718134 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.718201 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.718275 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.718295 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 
19:53:33.718375 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpj4g\" (UniqueName: \"kubernetes.io/projected/7ab97baa-4033-4617-8859-ae9d36458034-kube-api-access-jpj4g\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.718435 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.820722 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.820815 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.820914 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpj4g\" (UniqueName: \"kubernetes.io/projected/7ab97baa-4033-4617-8859-ae9d36458034-kube-api-access-jpj4g\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.821108 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.821550 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.821629 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.825575 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.825980 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.826351 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.828957 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.833828 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-neutron-metadata-combined-ca-bundle\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.852732 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpj4g\" (UniqueName: \"kubernetes.io/projected/7ab97baa-4033-4617-8859-ae9d36458034-kube-api-access-jpj4g\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:33 crc kubenswrapper[4754]: I0218 19:53:33.983231 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:53:34 crc kubenswrapper[4754]: I0218 19:53:34.490032 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5"] Feb 18 19:53:35 crc kubenswrapper[4754]: I0218 19:53:35.474078 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" event={"ID":"7ab97baa-4033-4617-8859-ae9d36458034","Type":"ContainerStarted","Data":"7ae4a32bc52940fe15423c28a0f115862f3fc44e845b521fe7dba82b7bfad18c"} Feb 18 19:53:35 crc kubenswrapper[4754]: I0218 19:53:35.474623 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" event={"ID":"7ab97baa-4033-4617-8859-ae9d36458034","Type":"ContainerStarted","Data":"53cb71930632ceb27add663c310d3d061bfccf76f391eed90e168d0bfc794bfe"} Feb 18 19:53:35 crc kubenswrapper[4754]: I0218 19:53:35.495902 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" podStartSLOduration=2.079483708 
podStartE2EDuration="2.495880164s" podCreationTimestamp="2026-02-18 19:53:33 +0000 UTC" firstStartedPulling="2026-02-18 19:53:34.496060296 +0000 UTC m=+2116.946473092" lastFinishedPulling="2026-02-18 19:53:34.912456752 +0000 UTC m=+2117.362869548" observedRunningTime="2026-02-18 19:53:35.494100139 +0000 UTC m=+2117.944512955" watchObservedRunningTime="2026-02-18 19:53:35.495880164 +0000 UTC m=+2117.946292960" Feb 18 19:53:38 crc kubenswrapper[4754]: I0218 19:53:38.096995 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:53:38 crc kubenswrapper[4754]: I0218 19:53:38.097352 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:53:48 crc kubenswrapper[4754]: I0218 19:53:48.986555 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w57rb"] Feb 18 19:53:48 crc kubenswrapper[4754]: I0218 19:53:48.990174 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w57rb" Feb 18 19:53:49 crc kubenswrapper[4754]: I0218 19:53:49.000871 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w57rb"] Feb 18 19:53:49 crc kubenswrapper[4754]: I0218 19:53:49.160043 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aeccd35-de4d-4517-a1fa-4b80d238a3c1-catalog-content\") pod \"community-operators-w57rb\" (UID: \"0aeccd35-de4d-4517-a1fa-4b80d238a3c1\") " pod="openshift-marketplace/community-operators-w57rb" Feb 18 19:53:49 crc kubenswrapper[4754]: I0218 19:53:49.160202 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aeccd35-de4d-4517-a1fa-4b80d238a3c1-utilities\") pod \"community-operators-w57rb\" (UID: \"0aeccd35-de4d-4517-a1fa-4b80d238a3c1\") " pod="openshift-marketplace/community-operators-w57rb" Feb 18 19:53:49 crc kubenswrapper[4754]: I0218 19:53:49.160267 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2drh\" (UniqueName: \"kubernetes.io/projected/0aeccd35-de4d-4517-a1fa-4b80d238a3c1-kube-api-access-q2drh\") pod \"community-operators-w57rb\" (UID: \"0aeccd35-de4d-4517-a1fa-4b80d238a3c1\") " pod="openshift-marketplace/community-operators-w57rb" Feb 18 19:53:49 crc kubenswrapper[4754]: I0218 19:53:49.262395 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2drh\" (UniqueName: \"kubernetes.io/projected/0aeccd35-de4d-4517-a1fa-4b80d238a3c1-kube-api-access-q2drh\") pod \"community-operators-w57rb\" (UID: \"0aeccd35-de4d-4517-a1fa-4b80d238a3c1\") " pod="openshift-marketplace/community-operators-w57rb" Feb 18 19:53:49 crc kubenswrapper[4754]: I0218 19:53:49.262549 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aeccd35-de4d-4517-a1fa-4b80d238a3c1-catalog-content\") pod \"community-operators-w57rb\" (UID: \"0aeccd35-de4d-4517-a1fa-4b80d238a3c1\") " pod="openshift-marketplace/community-operators-w57rb" Feb 18 19:53:49 crc kubenswrapper[4754]: I0218 19:53:49.262661 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aeccd35-de4d-4517-a1fa-4b80d238a3c1-utilities\") pod \"community-operators-w57rb\" (UID: \"0aeccd35-de4d-4517-a1fa-4b80d238a3c1\") " pod="openshift-marketplace/community-operators-w57rb" Feb 18 19:53:49 crc kubenswrapper[4754]: I0218 19:53:49.263191 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aeccd35-de4d-4517-a1fa-4b80d238a3c1-utilities\") pod \"community-operators-w57rb\" (UID: \"0aeccd35-de4d-4517-a1fa-4b80d238a3c1\") " pod="openshift-marketplace/community-operators-w57rb" Feb 18 19:53:49 crc kubenswrapper[4754]: I0218 19:53:49.263283 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aeccd35-de4d-4517-a1fa-4b80d238a3c1-catalog-content\") pod \"community-operators-w57rb\" (UID: \"0aeccd35-de4d-4517-a1fa-4b80d238a3c1\") " pod="openshift-marketplace/community-operators-w57rb" Feb 18 19:53:49 crc kubenswrapper[4754]: I0218 19:53:49.282062 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2drh\" (UniqueName: \"kubernetes.io/projected/0aeccd35-de4d-4517-a1fa-4b80d238a3c1-kube-api-access-q2drh\") pod \"community-operators-w57rb\" (UID: \"0aeccd35-de4d-4517-a1fa-4b80d238a3c1\") " pod="openshift-marketplace/community-operators-w57rb" Feb 18 19:53:49 crc kubenswrapper[4754]: I0218 19:53:49.312996 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w57rb" Feb 18 19:53:49 crc kubenswrapper[4754]: I0218 19:53:49.879899 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w57rb"] Feb 18 19:53:50 crc kubenswrapper[4754]: I0218 19:53:50.634912 4754 generic.go:334] "Generic (PLEG): container finished" podID="0aeccd35-de4d-4517-a1fa-4b80d238a3c1" containerID="7fb5a1fd787c257b311aff49969d111e7c8e6d8f9bfa02e23af913e0552ddeed" exitCode=0 Feb 18 19:53:50 crc kubenswrapper[4754]: I0218 19:53:50.635263 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w57rb" event={"ID":"0aeccd35-de4d-4517-a1fa-4b80d238a3c1","Type":"ContainerDied","Data":"7fb5a1fd787c257b311aff49969d111e7c8e6d8f9bfa02e23af913e0552ddeed"} Feb 18 19:53:50 crc kubenswrapper[4754]: I0218 19:53:50.635290 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w57rb" event={"ID":"0aeccd35-de4d-4517-a1fa-4b80d238a3c1","Type":"ContainerStarted","Data":"b18547e84bfa48a340be0aea574302bdf8861c8f486dc87edef2d61a7942574c"} Feb 18 19:53:51 crc kubenswrapper[4754]: I0218 19:53:51.646699 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w57rb" event={"ID":"0aeccd35-de4d-4517-a1fa-4b80d238a3c1","Type":"ContainerStarted","Data":"fc38805242315d075b5afdd7d4a7a89358bab58eeeb6c985001d65a47f59ee09"} Feb 18 19:53:52 crc kubenswrapper[4754]: I0218 19:53:52.684771 4754 generic.go:334] "Generic (PLEG): container finished" podID="0aeccd35-de4d-4517-a1fa-4b80d238a3c1" containerID="fc38805242315d075b5afdd7d4a7a89358bab58eeeb6c985001d65a47f59ee09" exitCode=0 Feb 18 19:53:52 crc kubenswrapper[4754]: I0218 19:53:52.685820 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w57rb" 
event={"ID":"0aeccd35-de4d-4517-a1fa-4b80d238a3c1","Type":"ContainerDied","Data":"fc38805242315d075b5afdd7d4a7a89358bab58eeeb6c985001d65a47f59ee09"} Feb 18 19:53:53 crc kubenswrapper[4754]: I0218 19:53:53.696827 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w57rb" event={"ID":"0aeccd35-de4d-4517-a1fa-4b80d238a3c1","Type":"ContainerStarted","Data":"5cac2b295275013f3b398eef6cd73b1ab1039509a00e0afdf73a0d036e1d16da"} Feb 18 19:53:53 crc kubenswrapper[4754]: I0218 19:53:53.722981 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w57rb" podStartSLOduration=3.221541106 podStartE2EDuration="5.722962405s" podCreationTimestamp="2026-02-18 19:53:48 +0000 UTC" firstStartedPulling="2026-02-18 19:53:50.637738182 +0000 UTC m=+2133.088150978" lastFinishedPulling="2026-02-18 19:53:53.139159471 +0000 UTC m=+2135.589572277" observedRunningTime="2026-02-18 19:53:53.715439271 +0000 UTC m=+2136.165852067" watchObservedRunningTime="2026-02-18 19:53:53.722962405 +0000 UTC m=+2136.173375201" Feb 18 19:53:59 crc kubenswrapper[4754]: I0218 19:53:59.313603 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w57rb" Feb 18 19:53:59 crc kubenswrapper[4754]: I0218 19:53:59.314432 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w57rb" Feb 18 19:53:59 crc kubenswrapper[4754]: I0218 19:53:59.393079 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w57rb" Feb 18 19:53:59 crc kubenswrapper[4754]: I0218 19:53:59.818621 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w57rb" Feb 18 19:54:00 crc kubenswrapper[4754]: I0218 19:54:00.155416 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-w57rb"] Feb 18 19:54:01 crc kubenswrapper[4754]: I0218 19:54:01.794951 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w57rb" podUID="0aeccd35-de4d-4517-a1fa-4b80d238a3c1" containerName="registry-server" containerID="cri-o://5cac2b295275013f3b398eef6cd73b1ab1039509a00e0afdf73a0d036e1d16da" gracePeriod=2 Feb 18 19:54:02 crc kubenswrapper[4754]: I0218 19:54:02.806986 4754 generic.go:334] "Generic (PLEG): container finished" podID="0aeccd35-de4d-4517-a1fa-4b80d238a3c1" containerID="5cac2b295275013f3b398eef6cd73b1ab1039509a00e0afdf73a0d036e1d16da" exitCode=0 Feb 18 19:54:02 crc kubenswrapper[4754]: I0218 19:54:02.807050 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w57rb" event={"ID":"0aeccd35-de4d-4517-a1fa-4b80d238a3c1","Type":"ContainerDied","Data":"5cac2b295275013f3b398eef6cd73b1ab1039509a00e0afdf73a0d036e1d16da"} Feb 18 19:54:02 crc kubenswrapper[4754]: I0218 19:54:02.807533 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w57rb" event={"ID":"0aeccd35-de4d-4517-a1fa-4b80d238a3c1","Type":"ContainerDied","Data":"b18547e84bfa48a340be0aea574302bdf8861c8f486dc87edef2d61a7942574c"} Feb 18 19:54:02 crc kubenswrapper[4754]: I0218 19:54:02.807553 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b18547e84bfa48a340be0aea574302bdf8861c8f486dc87edef2d61a7942574c" Feb 18 19:54:02 crc kubenswrapper[4754]: I0218 19:54:02.859062 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w57rb" Feb 18 19:54:02 crc kubenswrapper[4754]: I0218 19:54:02.955237 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aeccd35-de4d-4517-a1fa-4b80d238a3c1-utilities\") pod \"0aeccd35-de4d-4517-a1fa-4b80d238a3c1\" (UID: \"0aeccd35-de4d-4517-a1fa-4b80d238a3c1\") " Feb 18 19:54:02 crc kubenswrapper[4754]: I0218 19:54:02.955306 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aeccd35-de4d-4517-a1fa-4b80d238a3c1-catalog-content\") pod \"0aeccd35-de4d-4517-a1fa-4b80d238a3c1\" (UID: \"0aeccd35-de4d-4517-a1fa-4b80d238a3c1\") " Feb 18 19:54:02 crc kubenswrapper[4754]: I0218 19:54:02.955432 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2drh\" (UniqueName: \"kubernetes.io/projected/0aeccd35-de4d-4517-a1fa-4b80d238a3c1-kube-api-access-q2drh\") pod \"0aeccd35-de4d-4517-a1fa-4b80d238a3c1\" (UID: \"0aeccd35-de4d-4517-a1fa-4b80d238a3c1\") " Feb 18 19:54:02 crc kubenswrapper[4754]: I0218 19:54:02.957084 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aeccd35-de4d-4517-a1fa-4b80d238a3c1-utilities" (OuterVolumeSpecName: "utilities") pod "0aeccd35-de4d-4517-a1fa-4b80d238a3c1" (UID: "0aeccd35-de4d-4517-a1fa-4b80d238a3c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:02 crc kubenswrapper[4754]: I0218 19:54:02.970472 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aeccd35-de4d-4517-a1fa-4b80d238a3c1-kube-api-access-q2drh" (OuterVolumeSpecName: "kube-api-access-q2drh") pod "0aeccd35-de4d-4517-a1fa-4b80d238a3c1" (UID: "0aeccd35-de4d-4517-a1fa-4b80d238a3c1"). InnerVolumeSpecName "kube-api-access-q2drh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:03 crc kubenswrapper[4754]: I0218 19:54:03.017751 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aeccd35-de4d-4517-a1fa-4b80d238a3c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0aeccd35-de4d-4517-a1fa-4b80d238a3c1" (UID: "0aeccd35-de4d-4517-a1fa-4b80d238a3c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:03 crc kubenswrapper[4754]: I0218 19:54:03.058581 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aeccd35-de4d-4517-a1fa-4b80d238a3c1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:03 crc kubenswrapper[4754]: I0218 19:54:03.058625 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2drh\" (UniqueName: \"kubernetes.io/projected/0aeccd35-de4d-4517-a1fa-4b80d238a3c1-kube-api-access-q2drh\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:03 crc kubenswrapper[4754]: I0218 19:54:03.058637 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aeccd35-de4d-4517-a1fa-4b80d238a3c1-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:03 crc kubenswrapper[4754]: I0218 19:54:03.816174 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w57rb" Feb 18 19:54:03 crc kubenswrapper[4754]: I0218 19:54:03.849715 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w57rb"] Feb 18 19:54:03 crc kubenswrapper[4754]: I0218 19:54:03.859213 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w57rb"] Feb 18 19:54:04 crc kubenswrapper[4754]: I0218 19:54:04.222120 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aeccd35-de4d-4517-a1fa-4b80d238a3c1" path="/var/lib/kubelet/pods/0aeccd35-de4d-4517-a1fa-4b80d238a3c1/volumes" Feb 18 19:54:08 crc kubenswrapper[4754]: I0218 19:54:08.096952 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:54:08 crc kubenswrapper[4754]: I0218 19:54:08.097540 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:54:21 crc kubenswrapper[4754]: I0218 19:54:21.012185 4754 generic.go:334] "Generic (PLEG): container finished" podID="7ab97baa-4033-4617-8859-ae9d36458034" containerID="7ae4a32bc52940fe15423c28a0f115862f3fc44e845b521fe7dba82b7bfad18c" exitCode=0 Feb 18 19:54:21 crc kubenswrapper[4754]: I0218 19:54:21.012314 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" 
event={"ID":"7ab97baa-4033-4617-8859-ae9d36458034","Type":"ContainerDied","Data":"7ae4a32bc52940fe15423c28a0f115862f3fc44e845b521fe7dba82b7bfad18c"} Feb 18 19:54:22 crc kubenswrapper[4754]: I0218 19:54:22.483648 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:54:22 crc kubenswrapper[4754]: I0218 19:54:22.661450 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-neutron-ovn-metadata-agent-neutron-config-0\") pod \"7ab97baa-4033-4617-8859-ae9d36458034\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " Feb 18 19:54:22 crc kubenswrapper[4754]: I0218 19:54:22.661842 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-neutron-metadata-combined-ca-bundle\") pod \"7ab97baa-4033-4617-8859-ae9d36458034\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " Feb 18 19:54:22 crc kubenswrapper[4754]: I0218 19:54:22.661889 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-ssh-key-openstack-edpm-ipam\") pod \"7ab97baa-4033-4617-8859-ae9d36458034\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " Feb 18 19:54:22 crc kubenswrapper[4754]: I0218 19:54:22.661956 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpj4g\" (UniqueName: \"kubernetes.io/projected/7ab97baa-4033-4617-8859-ae9d36458034-kube-api-access-jpj4g\") pod \"7ab97baa-4033-4617-8859-ae9d36458034\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " Feb 18 19:54:22 crc kubenswrapper[4754]: I0218 19:54:22.661982 4754 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-inventory\") pod \"7ab97baa-4033-4617-8859-ae9d36458034\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " Feb 18 19:54:22 crc kubenswrapper[4754]: I0218 19:54:22.662016 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-nova-metadata-neutron-config-0\") pod \"7ab97baa-4033-4617-8859-ae9d36458034\" (UID: \"7ab97baa-4033-4617-8859-ae9d36458034\") " Feb 18 19:54:22 crc kubenswrapper[4754]: I0218 19:54:22.667421 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7ab97baa-4033-4617-8859-ae9d36458034" (UID: "7ab97baa-4033-4617-8859-ae9d36458034"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:22 crc kubenswrapper[4754]: I0218 19:54:22.667591 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab97baa-4033-4617-8859-ae9d36458034-kube-api-access-jpj4g" (OuterVolumeSpecName: "kube-api-access-jpj4g") pod "7ab97baa-4033-4617-8859-ae9d36458034" (UID: "7ab97baa-4033-4617-8859-ae9d36458034"). InnerVolumeSpecName "kube-api-access-jpj4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:22 crc kubenswrapper[4754]: I0218 19:54:22.694733 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7ab97baa-4033-4617-8859-ae9d36458034" (UID: "7ab97baa-4033-4617-8859-ae9d36458034"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:22 crc kubenswrapper[4754]: I0218 19:54:22.695392 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-inventory" (OuterVolumeSpecName: "inventory") pod "7ab97baa-4033-4617-8859-ae9d36458034" (UID: "7ab97baa-4033-4617-8859-ae9d36458034"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:22 crc kubenswrapper[4754]: I0218 19:54:22.696341 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "7ab97baa-4033-4617-8859-ae9d36458034" (UID: "7ab97baa-4033-4617-8859-ae9d36458034"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:22 crc kubenswrapper[4754]: I0218 19:54:22.697471 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "7ab97baa-4033-4617-8859-ae9d36458034" (UID: "7ab97baa-4033-4617-8859-ae9d36458034"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:54:22 crc kubenswrapper[4754]: I0218 19:54:22.764284 4754 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:22 crc kubenswrapper[4754]: I0218 19:54:22.764308 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:22 crc kubenswrapper[4754]: I0218 19:54:22.764320 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpj4g\" (UniqueName: \"kubernetes.io/projected/7ab97baa-4033-4617-8859-ae9d36458034-kube-api-access-jpj4g\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:22 crc kubenswrapper[4754]: I0218 19:54:22.764329 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:22 crc kubenswrapper[4754]: I0218 19:54:22.764339 4754 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:22 crc kubenswrapper[4754]: I0218 19:54:22.764347 4754 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ab97baa-4033-4617-8859-ae9d36458034-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.034775 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" event={"ID":"7ab97baa-4033-4617-8859-ae9d36458034","Type":"ContainerDied","Data":"53cb71930632ceb27add663c310d3d061bfccf76f391eed90e168d0bfc794bfe"} Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.034813 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53cb71930632ceb27add663c310d3d061bfccf76f391eed90e168d0bfc794bfe" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.034859 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mfxk5" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.138554 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv"] Feb 18 19:54:23 crc kubenswrapper[4754]: E0218 19:54:23.138898 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aeccd35-de4d-4517-a1fa-4b80d238a3c1" containerName="registry-server" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.138913 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aeccd35-de4d-4517-a1fa-4b80d238a3c1" containerName="registry-server" Feb 18 19:54:23 crc kubenswrapper[4754]: E0218 19:54:23.138928 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aeccd35-de4d-4517-a1fa-4b80d238a3c1" containerName="extract-utilities" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.138935 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aeccd35-de4d-4517-a1fa-4b80d238a3c1" containerName="extract-utilities" Feb 18 19:54:23 crc kubenswrapper[4754]: E0218 19:54:23.138950 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aeccd35-de4d-4517-a1fa-4b80d238a3c1" containerName="extract-content" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.138956 4754 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0aeccd35-de4d-4517-a1fa-4b80d238a3c1" containerName="extract-content" Feb 18 19:54:23 crc kubenswrapper[4754]: E0218 19:54:23.138964 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab97baa-4033-4617-8859-ae9d36458034" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.138972 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab97baa-4033-4617-8859-ae9d36458034" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.139169 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aeccd35-de4d-4517-a1fa-4b80d238a3c1" containerName="registry-server" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.139178 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab97baa-4033-4617-8859-ae9d36458034" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.139769 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.142876 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.143607 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.143779 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.144032 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.144252 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bt6gd" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.157494 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv"] Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.272498 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.272606 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.272647 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqtx7\" (UniqueName: \"kubernetes.io/projected/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-kube-api-access-rqtx7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.272717 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.272772 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.374528 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.374963 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.375321 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.375470 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.375510 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqtx7\" (UniqueName: \"kubernetes.io/projected/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-kube-api-access-rqtx7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.378782 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-ssh-key-openstack-edpm-ipam\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.379279 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.380404 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.386967 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.406927 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqtx7\" (UniqueName: \"kubernetes.io/projected/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-kube-api-access-rqtx7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.468218 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" Feb 18 19:54:23 crc kubenswrapper[4754]: I0218 19:54:23.980497 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv"] Feb 18 19:54:24 crc kubenswrapper[4754]: I0218 19:54:24.042584 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" event={"ID":"db7b45c7-a9a5-44e3-a121-1c1c551cc0db","Type":"ContainerStarted","Data":"caf7a6c1bf1850832cf669f0127b2428a7b990298f216bd635b27b83b7112865"} Feb 18 19:54:25 crc kubenswrapper[4754]: I0218 19:54:25.056874 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" event={"ID":"db7b45c7-a9a5-44e3-a121-1c1c551cc0db","Type":"ContainerStarted","Data":"36586fa25ef29c202efe3648314b3af2c446c618a0fcba250e2f570ba31934a9"} Feb 18 19:54:25 crc kubenswrapper[4754]: I0218 19:54:25.075934 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" podStartSLOduration=1.62616022 podStartE2EDuration="2.075910398s" podCreationTimestamp="2026-02-18 19:54:23 +0000 UTC" firstStartedPulling="2026-02-18 19:54:23.986436529 +0000 UTC m=+2166.436849325" lastFinishedPulling="2026-02-18 19:54:24.436186697 +0000 UTC m=+2166.886599503" observedRunningTime="2026-02-18 19:54:25.075695401 +0000 UTC m=+2167.526108207" watchObservedRunningTime="2026-02-18 19:54:25.075910398 +0000 UTC m=+2167.526323214" Feb 18 19:54:38 crc kubenswrapper[4754]: I0218 19:54:38.096949 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:54:38 crc kubenswrapper[4754]: I0218 
19:54:38.099716 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:54:38 crc kubenswrapper[4754]: I0218 19:54:38.100021 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 19:54:38 crc kubenswrapper[4754]: I0218 19:54:38.101766 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2"} pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:54:38 crc kubenswrapper[4754]: I0218 19:54:38.102212 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" containerID="cri-o://37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" gracePeriod=600 Feb 18 19:54:38 crc kubenswrapper[4754]: E0218 19:54:38.251560 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:54:38 crc kubenswrapper[4754]: I0218 19:54:38.864550 4754 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-s2cnt"] Feb 18 19:54:38 crc kubenswrapper[4754]: I0218 19:54:38.867300 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2cnt" Feb 18 19:54:38 crc kubenswrapper[4754]: I0218 19:54:38.883699 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2cnt"] Feb 18 19:54:38 crc kubenswrapper[4754]: I0218 19:54:38.995400 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581-utilities\") pod \"redhat-marketplace-s2cnt\" (UID: \"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581\") " pod="openshift-marketplace/redhat-marketplace-s2cnt" Feb 18 19:54:38 crc kubenswrapper[4754]: I0218 19:54:38.995488 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwv5s\" (UniqueName: \"kubernetes.io/projected/0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581-kube-api-access-nwv5s\") pod \"redhat-marketplace-s2cnt\" (UID: \"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581\") " pod="openshift-marketplace/redhat-marketplace-s2cnt" Feb 18 19:54:38 crc kubenswrapper[4754]: I0218 19:54:38.995756 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581-catalog-content\") pod \"redhat-marketplace-s2cnt\" (UID: \"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581\") " pod="openshift-marketplace/redhat-marketplace-s2cnt" Feb 18 19:54:39 crc kubenswrapper[4754]: I0218 19:54:39.097956 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581-utilities\") pod \"redhat-marketplace-s2cnt\" (UID: \"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581\") " 
pod="openshift-marketplace/redhat-marketplace-s2cnt" Feb 18 19:54:39 crc kubenswrapper[4754]: I0218 19:54:39.099209 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwv5s\" (UniqueName: \"kubernetes.io/projected/0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581-kube-api-access-nwv5s\") pod \"redhat-marketplace-s2cnt\" (UID: \"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581\") " pod="openshift-marketplace/redhat-marketplace-s2cnt" Feb 18 19:54:39 crc kubenswrapper[4754]: I0218 19:54:39.098671 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581-utilities\") pod \"redhat-marketplace-s2cnt\" (UID: \"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581\") " pod="openshift-marketplace/redhat-marketplace-s2cnt" Feb 18 19:54:39 crc kubenswrapper[4754]: I0218 19:54:39.099384 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581-catalog-content\") pod \"redhat-marketplace-s2cnt\" (UID: \"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581\") " pod="openshift-marketplace/redhat-marketplace-s2cnt" Feb 18 19:54:39 crc kubenswrapper[4754]: I0218 19:54:39.099834 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581-catalog-content\") pod \"redhat-marketplace-s2cnt\" (UID: \"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581\") " pod="openshift-marketplace/redhat-marketplace-s2cnt" Feb 18 19:54:39 crc kubenswrapper[4754]: I0218 19:54:39.118901 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwv5s\" (UniqueName: \"kubernetes.io/projected/0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581-kube-api-access-nwv5s\") pod \"redhat-marketplace-s2cnt\" (UID: \"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581\") " 
pod="openshift-marketplace/redhat-marketplace-s2cnt" Feb 18 19:54:39 crc kubenswrapper[4754]: I0218 19:54:39.189771 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2cnt" Feb 18 19:54:39 crc kubenswrapper[4754]: I0218 19:54:39.221878 4754 generic.go:334] "Generic (PLEG): container finished" podID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" exitCode=0 Feb 18 19:54:39 crc kubenswrapper[4754]: I0218 19:54:39.221937 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerDied","Data":"37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2"} Feb 18 19:54:39 crc kubenswrapper[4754]: I0218 19:54:39.221994 4754 scope.go:117] "RemoveContainer" containerID="d9f2023f02567cdf6089106e2f4a1b2d50f661e61a8c391b007983e0df2635db" Feb 18 19:54:39 crc kubenswrapper[4754]: I0218 19:54:39.222776 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:54:39 crc kubenswrapper[4754]: E0218 19:54:39.223107 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:54:39 crc kubenswrapper[4754]: I0218 19:54:39.758878 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2cnt"] Feb 18 19:54:39 crc kubenswrapper[4754]: W0218 19:54:39.763952 4754 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b66ed66_7fcb_4ac9_bf2b_a0e44a8dc581.slice/crio-1f588cc2fa0573c959af8ab68693e807eccfec2bf560b52adbce35d810a7ca60 WatchSource:0}: Error finding container 1f588cc2fa0573c959af8ab68693e807eccfec2bf560b52adbce35d810a7ca60: Status 404 returned error can't find the container with id 1f588cc2fa0573c959af8ab68693e807eccfec2bf560b52adbce35d810a7ca60 Feb 18 19:54:40 crc kubenswrapper[4754]: I0218 19:54:40.239934 4754 generic.go:334] "Generic (PLEG): container finished" podID="0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581" containerID="56172b0a21c8e6dde0ba0c2b59fd9a160dc64f8d42f8c129f94bb5f275a4e6d7" exitCode=0 Feb 18 19:54:40 crc kubenswrapper[4754]: I0218 19:54:40.240067 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2cnt" event={"ID":"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581","Type":"ContainerDied","Data":"56172b0a21c8e6dde0ba0c2b59fd9a160dc64f8d42f8c129f94bb5f275a4e6d7"} Feb 18 19:54:40 crc kubenswrapper[4754]: I0218 19:54:40.240358 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2cnt" event={"ID":"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581","Type":"ContainerStarted","Data":"1f588cc2fa0573c959af8ab68693e807eccfec2bf560b52adbce35d810a7ca60"} Feb 18 19:54:41 crc kubenswrapper[4754]: I0218 19:54:41.259054 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2cnt" event={"ID":"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581","Type":"ContainerStarted","Data":"614ba0c7a94c2b5408ffe309ffd2a5ad527f8e80bcf08bbb1596e5b77c61bb89"} Feb 18 19:54:42 crc kubenswrapper[4754]: I0218 19:54:42.277287 4754 generic.go:334] "Generic (PLEG): container finished" podID="0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581" containerID="614ba0c7a94c2b5408ffe309ffd2a5ad527f8e80bcf08bbb1596e5b77c61bb89" exitCode=0 Feb 18 19:54:42 crc kubenswrapper[4754]: I0218 19:54:42.277423 4754 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-s2cnt" event={"ID":"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581","Type":"ContainerDied","Data":"614ba0c7a94c2b5408ffe309ffd2a5ad527f8e80bcf08bbb1596e5b77c61bb89"} Feb 18 19:54:43 crc kubenswrapper[4754]: I0218 19:54:43.287441 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2cnt" event={"ID":"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581","Type":"ContainerStarted","Data":"751544395f06575de07d859148973b04e738755fac7c5a16b6ce05df36858378"} Feb 18 19:54:43 crc kubenswrapper[4754]: I0218 19:54:43.312580 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s2cnt" podStartSLOduration=2.8820392740000003 podStartE2EDuration="5.312564458s" podCreationTimestamp="2026-02-18 19:54:38 +0000 UTC" firstStartedPulling="2026-02-18 19:54:40.242435056 +0000 UTC m=+2182.692847862" lastFinishedPulling="2026-02-18 19:54:42.67296025 +0000 UTC m=+2185.123373046" observedRunningTime="2026-02-18 19:54:43.305473157 +0000 UTC m=+2185.755885953" watchObservedRunningTime="2026-02-18 19:54:43.312564458 +0000 UTC m=+2185.762977254" Feb 18 19:54:49 crc kubenswrapper[4754]: I0218 19:54:49.190769 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s2cnt" Feb 18 19:54:49 crc kubenswrapper[4754]: I0218 19:54:49.191504 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s2cnt" Feb 18 19:54:49 crc kubenswrapper[4754]: I0218 19:54:49.243337 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s2cnt" Feb 18 19:54:49 crc kubenswrapper[4754]: I0218 19:54:49.410592 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s2cnt" Feb 18 19:54:49 crc kubenswrapper[4754]: I0218 19:54:49.497392 4754 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2cnt"] Feb 18 19:54:51 crc kubenswrapper[4754]: I0218 19:54:51.377630 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s2cnt" podUID="0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581" containerName="registry-server" containerID="cri-o://751544395f06575de07d859148973b04e738755fac7c5a16b6ce05df36858378" gracePeriod=2 Feb 18 19:54:51 crc kubenswrapper[4754]: I0218 19:54:51.844830 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2cnt" Feb 18 19:54:51 crc kubenswrapper[4754]: I0218 19:54:51.983248 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581-catalog-content\") pod \"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581\" (UID: \"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581\") " Feb 18 19:54:51 crc kubenswrapper[4754]: I0218 19:54:51.983332 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwv5s\" (UniqueName: \"kubernetes.io/projected/0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581-kube-api-access-nwv5s\") pod \"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581\" (UID: \"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581\") " Feb 18 19:54:51 crc kubenswrapper[4754]: I0218 19:54:51.983568 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581-utilities\") pod \"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581\" (UID: \"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581\") " Feb 18 19:54:51 crc kubenswrapper[4754]: I0218 19:54:51.985337 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581-utilities" (OuterVolumeSpecName: "utilities") pod 
"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581" (UID: "0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:51 crc kubenswrapper[4754]: I0218 19:54:51.993414 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581-kube-api-access-nwv5s" (OuterVolumeSpecName: "kube-api-access-nwv5s") pod "0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581" (UID: "0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581"). InnerVolumeSpecName "kube-api-access-nwv5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4754]: I0218 19:54:52.086074 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4754]: I0218 19:54:52.086326 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwv5s\" (UniqueName: \"kubernetes.io/projected/0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581-kube-api-access-nwv5s\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4754]: I0218 19:54:52.101174 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581" (UID: "0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:52 crc kubenswrapper[4754]: I0218 19:54:52.193560 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:52 crc kubenswrapper[4754]: I0218 19:54:52.392737 4754 generic.go:334] "Generic (PLEG): container finished" podID="0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581" containerID="751544395f06575de07d859148973b04e738755fac7c5a16b6ce05df36858378" exitCode=0 Feb 18 19:54:52 crc kubenswrapper[4754]: I0218 19:54:52.392815 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2cnt" event={"ID":"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581","Type":"ContainerDied","Data":"751544395f06575de07d859148973b04e738755fac7c5a16b6ce05df36858378"} Feb 18 19:54:52 crc kubenswrapper[4754]: I0218 19:54:52.392850 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2cnt" event={"ID":"0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581","Type":"ContainerDied","Data":"1f588cc2fa0573c959af8ab68693e807eccfec2bf560b52adbce35d810a7ca60"} Feb 18 19:54:52 crc kubenswrapper[4754]: I0218 19:54:52.392872 4754 scope.go:117] "RemoveContainer" containerID="751544395f06575de07d859148973b04e738755fac7c5a16b6ce05df36858378" Feb 18 19:54:52 crc kubenswrapper[4754]: I0218 19:54:52.392881 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2cnt" Feb 18 19:54:52 crc kubenswrapper[4754]: E0218 19:54:52.396618 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b66ed66_7fcb_4ac9_bf2b_a0e44a8dc581.slice\": RecentStats: unable to find data in memory cache]" Feb 18 19:54:52 crc kubenswrapper[4754]: I0218 19:54:52.423885 4754 scope.go:117] "RemoveContainer" containerID="614ba0c7a94c2b5408ffe309ffd2a5ad527f8e80bcf08bbb1596e5b77c61bb89" Feb 18 19:54:52 crc kubenswrapper[4754]: I0218 19:54:52.435912 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2cnt"] Feb 18 19:54:52 crc kubenswrapper[4754]: I0218 19:54:52.444460 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2cnt"] Feb 18 19:54:52 crc kubenswrapper[4754]: I0218 19:54:52.449988 4754 scope.go:117] "RemoveContainer" containerID="56172b0a21c8e6dde0ba0c2b59fd9a160dc64f8d42f8c129f94bb5f275a4e6d7" Feb 18 19:54:52 crc kubenswrapper[4754]: I0218 19:54:52.491607 4754 scope.go:117] "RemoveContainer" containerID="751544395f06575de07d859148973b04e738755fac7c5a16b6ce05df36858378" Feb 18 19:54:52 crc kubenswrapper[4754]: E0218 19:54:52.492029 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"751544395f06575de07d859148973b04e738755fac7c5a16b6ce05df36858378\": container with ID starting with 751544395f06575de07d859148973b04e738755fac7c5a16b6ce05df36858378 not found: ID does not exist" containerID="751544395f06575de07d859148973b04e738755fac7c5a16b6ce05df36858378" Feb 18 19:54:52 crc kubenswrapper[4754]: I0218 19:54:52.492065 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"751544395f06575de07d859148973b04e738755fac7c5a16b6ce05df36858378"} 
err="failed to get container status \"751544395f06575de07d859148973b04e738755fac7c5a16b6ce05df36858378\": rpc error: code = NotFound desc = could not find container \"751544395f06575de07d859148973b04e738755fac7c5a16b6ce05df36858378\": container with ID starting with 751544395f06575de07d859148973b04e738755fac7c5a16b6ce05df36858378 not found: ID does not exist" Feb 18 19:54:52 crc kubenswrapper[4754]: I0218 19:54:52.492303 4754 scope.go:117] "RemoveContainer" containerID="614ba0c7a94c2b5408ffe309ffd2a5ad527f8e80bcf08bbb1596e5b77c61bb89" Feb 18 19:54:52 crc kubenswrapper[4754]: E0218 19:54:52.492862 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"614ba0c7a94c2b5408ffe309ffd2a5ad527f8e80bcf08bbb1596e5b77c61bb89\": container with ID starting with 614ba0c7a94c2b5408ffe309ffd2a5ad527f8e80bcf08bbb1596e5b77c61bb89 not found: ID does not exist" containerID="614ba0c7a94c2b5408ffe309ffd2a5ad527f8e80bcf08bbb1596e5b77c61bb89" Feb 18 19:54:52 crc kubenswrapper[4754]: I0218 19:54:52.492908 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"614ba0c7a94c2b5408ffe309ffd2a5ad527f8e80bcf08bbb1596e5b77c61bb89"} err="failed to get container status \"614ba0c7a94c2b5408ffe309ffd2a5ad527f8e80bcf08bbb1596e5b77c61bb89\": rpc error: code = NotFound desc = could not find container \"614ba0c7a94c2b5408ffe309ffd2a5ad527f8e80bcf08bbb1596e5b77c61bb89\": container with ID starting with 614ba0c7a94c2b5408ffe309ffd2a5ad527f8e80bcf08bbb1596e5b77c61bb89 not found: ID does not exist" Feb 18 19:54:52 crc kubenswrapper[4754]: I0218 19:54:52.492937 4754 scope.go:117] "RemoveContainer" containerID="56172b0a21c8e6dde0ba0c2b59fd9a160dc64f8d42f8c129f94bb5f275a4e6d7" Feb 18 19:54:52 crc kubenswrapper[4754]: E0218 19:54:52.493553 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"56172b0a21c8e6dde0ba0c2b59fd9a160dc64f8d42f8c129f94bb5f275a4e6d7\": container with ID starting with 56172b0a21c8e6dde0ba0c2b59fd9a160dc64f8d42f8c129f94bb5f275a4e6d7 not found: ID does not exist" containerID="56172b0a21c8e6dde0ba0c2b59fd9a160dc64f8d42f8c129f94bb5f275a4e6d7" Feb 18 19:54:52 crc kubenswrapper[4754]: I0218 19:54:52.493681 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56172b0a21c8e6dde0ba0c2b59fd9a160dc64f8d42f8c129f94bb5f275a4e6d7"} err="failed to get container status \"56172b0a21c8e6dde0ba0c2b59fd9a160dc64f8d42f8c129f94bb5f275a4e6d7\": rpc error: code = NotFound desc = could not find container \"56172b0a21c8e6dde0ba0c2b59fd9a160dc64f8d42f8c129f94bb5f275a4e6d7\": container with ID starting with 56172b0a21c8e6dde0ba0c2b59fd9a160dc64f8d42f8c129f94bb5f275a4e6d7 not found: ID does not exist" Feb 18 19:54:53 crc kubenswrapper[4754]: I0218 19:54:53.209400 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:54:53 crc kubenswrapper[4754]: E0218 19:54:53.209892 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:54:54 crc kubenswrapper[4754]: I0218 19:54:54.224249 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581" path="/var/lib/kubelet/pods/0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581/volumes" Feb 18 19:55:07 crc kubenswrapper[4754]: I0218 19:55:07.209740 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:55:07 crc 
kubenswrapper[4754]: E0218 19:55:07.211581 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:55:18 crc kubenswrapper[4754]: I0218 19:55:18.222684 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:55:18 crc kubenswrapper[4754]: E0218 19:55:18.224101 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:55:31 crc kubenswrapper[4754]: I0218 19:55:31.210052 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:55:31 crc kubenswrapper[4754]: E0218 19:55:31.210909 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:55:43 crc kubenswrapper[4754]: I0218 19:55:43.210527 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 
18 19:55:43 crc kubenswrapper[4754]: E0218 19:55:43.211502 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:55:58 crc kubenswrapper[4754]: I0218 19:55:58.223352 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:55:58 crc kubenswrapper[4754]: E0218 19:55:58.224369 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:56:10 crc kubenswrapper[4754]: I0218 19:56:10.209999 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:56:10 crc kubenswrapper[4754]: E0218 19:56:10.210944 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:56:23 crc kubenswrapper[4754]: I0218 19:56:23.209906 4754 scope.go:117] "RemoveContainer" 
containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:56:23 crc kubenswrapper[4754]: E0218 19:56:23.210960 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:56:34 crc kubenswrapper[4754]: I0218 19:56:34.210387 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:56:34 crc kubenswrapper[4754]: E0218 19:56:34.211187 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:56:48 crc kubenswrapper[4754]: I0218 19:56:48.228625 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:56:48 crc kubenswrapper[4754]: E0218 19:56:48.229735 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:57:00 crc kubenswrapper[4754]: I0218 19:57:00.209977 4754 scope.go:117] 
"RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:57:00 crc kubenswrapper[4754]: E0218 19:57:00.211768 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:57:15 crc kubenswrapper[4754]: I0218 19:57:15.209835 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:57:15 crc kubenswrapper[4754]: E0218 19:57:15.210452 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:57:29 crc kubenswrapper[4754]: I0218 19:57:29.210157 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:57:29 crc kubenswrapper[4754]: E0218 19:57:29.210785 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:57:43 crc kubenswrapper[4754]: I0218 19:57:43.210550 
4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:57:43 crc kubenswrapper[4754]: E0218 19:57:43.211417 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:57:55 crc kubenswrapper[4754]: I0218 19:57:55.213183 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:57:55 crc kubenswrapper[4754]: E0218 19:57:55.213968 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:58:09 crc kubenswrapper[4754]: I0218 19:58:09.209765 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:58:09 crc kubenswrapper[4754]: E0218 19:58:09.210908 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:58:13 crc kubenswrapper[4754]: I0218 
19:58:13.536758 4754 generic.go:334] "Generic (PLEG): container finished" podID="db7b45c7-a9a5-44e3-a121-1c1c551cc0db" containerID="36586fa25ef29c202efe3648314b3af2c446c618a0fcba250e2f570ba31934a9" exitCode=0 Feb 18 19:58:13 crc kubenswrapper[4754]: I0218 19:58:13.537263 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" event={"ID":"db7b45c7-a9a5-44e3-a121-1c1c551cc0db","Type":"ContainerDied","Data":"36586fa25ef29c202efe3648314b3af2c446c618a0fcba250e2f570ba31934a9"} Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.028209 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.116098 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-inventory\") pod \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.116527 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-libvirt-combined-ca-bundle\") pod \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.116578 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqtx7\" (UniqueName: \"kubernetes.io/projected/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-kube-api-access-rqtx7\") pod \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.117103 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-ssh-key-openstack-edpm-ipam\") pod \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.117209 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-libvirt-secret-0\") pod \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\" (UID: \"db7b45c7-a9a5-44e3-a121-1c1c551cc0db\") " Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.121165 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-kube-api-access-rqtx7" (OuterVolumeSpecName: "kube-api-access-rqtx7") pod "db7b45c7-a9a5-44e3-a121-1c1c551cc0db" (UID: "db7b45c7-a9a5-44e3-a121-1c1c551cc0db"). InnerVolumeSpecName "kube-api-access-rqtx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.122313 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "db7b45c7-a9a5-44e3-a121-1c1c551cc0db" (UID: "db7b45c7-a9a5-44e3-a121-1c1c551cc0db"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.147187 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "db7b45c7-a9a5-44e3-a121-1c1c551cc0db" (UID: "db7b45c7-a9a5-44e3-a121-1c1c551cc0db"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.157611 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "db7b45c7-a9a5-44e3-a121-1c1c551cc0db" (UID: "db7b45c7-a9a5-44e3-a121-1c1c551cc0db"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.161858 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-inventory" (OuterVolumeSpecName: "inventory") pod "db7b45c7-a9a5-44e3-a121-1c1c551cc0db" (UID: "db7b45c7-a9a5-44e3-a121-1c1c551cc0db"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.220073 4754 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.220103 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.220112 4754 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.220123 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqtx7\" (UniqueName: 
\"kubernetes.io/projected/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-kube-api-access-rqtx7\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.220132 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/db7b45c7-a9a5-44e3-a121-1c1c551cc0db-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.566198 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" event={"ID":"db7b45c7-a9a5-44e3-a121-1c1c551cc0db","Type":"ContainerDied","Data":"caf7a6c1bf1850832cf669f0127b2428a7b990298f216bd635b27b83b7112865"} Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.566254 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caf7a6c1bf1850832cf669f0127b2428a7b990298f216bd635b27b83b7112865" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.566279 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jhmhv" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.745259 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8"] Feb 18 19:58:15 crc kubenswrapper[4754]: E0218 19:58:15.745782 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7b45c7-a9a5-44e3-a121-1c1c551cc0db" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.745805 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7b45c7-a9a5-44e3-a121-1c1c551cc0db" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 19:58:15 crc kubenswrapper[4754]: E0218 19:58:15.745835 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581" containerName="registry-server" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.745854 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581" containerName="registry-server" Feb 18 19:58:15 crc kubenswrapper[4754]: E0218 19:58:15.745876 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581" containerName="extract-content" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.745884 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581" containerName="extract-content" Feb 18 19:58:15 crc kubenswrapper[4754]: E0218 19:58:15.745905 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581" containerName="extract-utilities" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.745914 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581" containerName="extract-utilities" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.746182 4754 
memory_manager.go:354] "RemoveStaleState removing state" podUID="db7b45c7-a9a5-44e3-a121-1c1c551cc0db" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.746221 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b66ed66-7fcb-4ac9-bf2b-a0e44a8dc581" containerName="registry-server" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.747088 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.751421 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.751903 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.752004 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.752660 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.752934 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.753709 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-bt6gd" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.753888 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.761051 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8"] Feb 18 19:58:15 crc 
kubenswrapper[4754]: I0218 19:58:15.829452 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.829506 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.829542 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.829564 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.829596 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.829621 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.829653 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.829817 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.830041 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.830099 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.830196 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzwfx\" (UniqueName: \"kubernetes.io/projected/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-kube-api-access-kzwfx\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.930964 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.931013 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc 
kubenswrapper[4754]: I0218 19:58:15.931046 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.931075 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.931104 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.931126 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.931184 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.931206 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.931230 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzwfx\" (UniqueName: \"kubernetes.io/projected/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-kube-api-access-kzwfx\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.931250 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.931282 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" 
Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.933988 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.937046 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.937679 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.937717 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.938206 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.938229 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.938394 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.938776 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.941019 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 
crc kubenswrapper[4754]: I0218 19:58:15.941965 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:15 crc kubenswrapper[4754]: I0218 19:58:15.953675 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzwfx\" (UniqueName: \"kubernetes.io/projected/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-kube-api-access-kzwfx\") pod \"nova-edpm-deployment-openstack-edpm-ipam-k5pc8\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:16 crc kubenswrapper[4754]: I0218 19:58:16.090973 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 19:58:16 crc kubenswrapper[4754]: I0218 19:58:16.626382 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8"] Feb 18 19:58:16 crc kubenswrapper[4754]: I0218 19:58:16.637210 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:58:17 crc kubenswrapper[4754]: I0218 19:58:17.589991 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" event={"ID":"73dcedb8-7dbd-4d35-99d1-f11a2b7745be","Type":"ContainerStarted","Data":"187b2e3539392a8fcaf4610a0c685ae4604878c52125e7b67d9ed78a0d5cc5fe"} Feb 18 19:58:17 crc kubenswrapper[4754]: I0218 19:58:17.590378 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" 
event={"ID":"73dcedb8-7dbd-4d35-99d1-f11a2b7745be","Type":"ContainerStarted","Data":"77aa18f82332353b0cf3ebdc39ddf5eb11815ffe29943f9d5d25825c46a1fbcb"} Feb 18 19:58:17 crc kubenswrapper[4754]: I0218 19:58:17.619274 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" podStartSLOduration=2.178971325 podStartE2EDuration="2.619252206s" podCreationTimestamp="2026-02-18 19:58:15 +0000 UTC" firstStartedPulling="2026-02-18 19:58:16.636851513 +0000 UTC m=+2399.087264329" lastFinishedPulling="2026-02-18 19:58:17.077132414 +0000 UTC m=+2399.527545210" observedRunningTime="2026-02-18 19:58:17.609100461 +0000 UTC m=+2400.059513257" watchObservedRunningTime="2026-02-18 19:58:17.619252206 +0000 UTC m=+2400.069665002" Feb 18 19:58:22 crc kubenswrapper[4754]: I0218 19:58:22.210805 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:58:22 crc kubenswrapper[4754]: E0218 19:58:22.211593 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:58:36 crc kubenswrapper[4754]: I0218 19:58:36.210354 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:58:36 crc kubenswrapper[4754]: E0218 19:58:36.211060 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:58:51 crc kubenswrapper[4754]: I0218 19:58:51.210213 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:58:51 crc kubenswrapper[4754]: E0218 19:58:51.211002 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:59:02 crc kubenswrapper[4754]: I0218 19:59:02.211136 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:59:02 crc kubenswrapper[4754]: E0218 19:59:02.212068 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:59:15 crc kubenswrapper[4754]: I0218 19:59:15.213957 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:59:15 crc kubenswrapper[4754]: E0218 19:59:15.215833 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:59:27 crc kubenswrapper[4754]: I0218 19:59:27.210690 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:59:27 crc kubenswrapper[4754]: E0218 19:59:27.211656 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 19:59:40 crc kubenswrapper[4754]: I0218 19:59:40.210035 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 19:59:40 crc kubenswrapper[4754]: I0218 19:59:40.514960 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"8ad271beffae4d53604516072d7e3753e99a9dd5613a29dc4bf6ae71a7b3a58b"} Feb 18 20:00:00 crc kubenswrapper[4754]: I0218 20:00:00.155674 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz"] Feb 18 20:00:00 crc kubenswrapper[4754]: I0218 20:00:00.157819 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz" Feb 18 20:00:00 crc kubenswrapper[4754]: I0218 20:00:00.161651 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 20:00:00 crc kubenswrapper[4754]: I0218 20:00:00.161811 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 20:00:00 crc kubenswrapper[4754]: I0218 20:00:00.185109 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz"] Feb 18 20:00:00 crc kubenswrapper[4754]: I0218 20:00:00.248365 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5txnx\" (UniqueName: \"kubernetes.io/projected/edb684bb-9943-4cd5-a4d4-208ee65d7adc-kube-api-access-5txnx\") pod \"collect-profiles-29524080-f54lz\" (UID: \"edb684bb-9943-4cd5-a4d4-208ee65d7adc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz" Feb 18 20:00:00 crc kubenswrapper[4754]: I0218 20:00:00.248470 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/edb684bb-9943-4cd5-a4d4-208ee65d7adc-config-volume\") pod \"collect-profiles-29524080-f54lz\" (UID: \"edb684bb-9943-4cd5-a4d4-208ee65d7adc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz" Feb 18 20:00:00 crc kubenswrapper[4754]: I0218 20:00:00.248497 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/edb684bb-9943-4cd5-a4d4-208ee65d7adc-secret-volume\") pod \"collect-profiles-29524080-f54lz\" (UID: \"edb684bb-9943-4cd5-a4d4-208ee65d7adc\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz" Feb 18 20:00:00 crc kubenswrapper[4754]: I0218 20:00:00.350514 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5txnx\" (UniqueName: \"kubernetes.io/projected/edb684bb-9943-4cd5-a4d4-208ee65d7adc-kube-api-access-5txnx\") pod \"collect-profiles-29524080-f54lz\" (UID: \"edb684bb-9943-4cd5-a4d4-208ee65d7adc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz" Feb 18 20:00:00 crc kubenswrapper[4754]: I0218 20:00:00.351688 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/edb684bb-9943-4cd5-a4d4-208ee65d7adc-config-volume\") pod \"collect-profiles-29524080-f54lz\" (UID: \"edb684bb-9943-4cd5-a4d4-208ee65d7adc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz" Feb 18 20:00:00 crc kubenswrapper[4754]: I0218 20:00:00.352637 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/edb684bb-9943-4cd5-a4d4-208ee65d7adc-config-volume\") pod \"collect-profiles-29524080-f54lz\" (UID: \"edb684bb-9943-4cd5-a4d4-208ee65d7adc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz" Feb 18 20:00:00 crc kubenswrapper[4754]: I0218 20:00:00.352728 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/edb684bb-9943-4cd5-a4d4-208ee65d7adc-secret-volume\") pod \"collect-profiles-29524080-f54lz\" (UID: \"edb684bb-9943-4cd5-a4d4-208ee65d7adc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz" Feb 18 20:00:00 crc kubenswrapper[4754]: I0218 20:00:00.358453 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/edb684bb-9943-4cd5-a4d4-208ee65d7adc-secret-volume\") pod \"collect-profiles-29524080-f54lz\" (UID: \"edb684bb-9943-4cd5-a4d4-208ee65d7adc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz" Feb 18 20:00:00 crc kubenswrapper[4754]: I0218 20:00:00.366917 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5txnx\" (UniqueName: \"kubernetes.io/projected/edb684bb-9943-4cd5-a4d4-208ee65d7adc-kube-api-access-5txnx\") pod \"collect-profiles-29524080-f54lz\" (UID: \"edb684bb-9943-4cd5-a4d4-208ee65d7adc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz" Feb 18 20:00:00 crc kubenswrapper[4754]: I0218 20:00:00.478009 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz" Feb 18 20:00:00 crc kubenswrapper[4754]: I0218 20:00:00.913950 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz"] Feb 18 20:00:00 crc kubenswrapper[4754]: W0218 20:00:00.924005 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedb684bb_9943_4cd5_a4d4_208ee65d7adc.slice/crio-19bda1c9fabf652950263949b4ff7d754068efde3e2e813734b21507f3c61857 WatchSource:0}: Error finding container 19bda1c9fabf652950263949b4ff7d754068efde3e2e813734b21507f3c61857: Status 404 returned error can't find the container with id 19bda1c9fabf652950263949b4ff7d754068efde3e2e813734b21507f3c61857 Feb 18 20:00:01 crc kubenswrapper[4754]: I0218 20:00:01.745479 4754 generic.go:334] "Generic (PLEG): container finished" podID="edb684bb-9943-4cd5-a4d4-208ee65d7adc" containerID="3cde63ea6e0887f734ed371b461777f6f566a680dcbdb9c0e6c35d0513849590" exitCode=0 Feb 18 20:00:01 crc kubenswrapper[4754]: I0218 20:00:01.745550 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz" event={"ID":"edb684bb-9943-4cd5-a4d4-208ee65d7adc","Type":"ContainerDied","Data":"3cde63ea6e0887f734ed371b461777f6f566a680dcbdb9c0e6c35d0513849590"} Feb 18 20:00:01 crc kubenswrapper[4754]: I0218 20:00:01.746034 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz" event={"ID":"edb684bb-9943-4cd5-a4d4-208ee65d7adc","Type":"ContainerStarted","Data":"19bda1c9fabf652950263949b4ff7d754068efde3e2e813734b21507f3c61857"} Feb 18 20:00:03 crc kubenswrapper[4754]: I0218 20:00:03.121389 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz" Feb 18 20:00:03 crc kubenswrapper[4754]: I0218 20:00:03.210190 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5txnx\" (UniqueName: \"kubernetes.io/projected/edb684bb-9943-4cd5-a4d4-208ee65d7adc-kube-api-access-5txnx\") pod \"edb684bb-9943-4cd5-a4d4-208ee65d7adc\" (UID: \"edb684bb-9943-4cd5-a4d4-208ee65d7adc\") " Feb 18 20:00:03 crc kubenswrapper[4754]: I0218 20:00:03.210703 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/edb684bb-9943-4cd5-a4d4-208ee65d7adc-config-volume\") pod \"edb684bb-9943-4cd5-a4d4-208ee65d7adc\" (UID: \"edb684bb-9943-4cd5-a4d4-208ee65d7adc\") " Feb 18 20:00:03 crc kubenswrapper[4754]: I0218 20:00:03.210802 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/edb684bb-9943-4cd5-a4d4-208ee65d7adc-secret-volume\") pod \"edb684bb-9943-4cd5-a4d4-208ee65d7adc\" (UID: \"edb684bb-9943-4cd5-a4d4-208ee65d7adc\") " Feb 18 20:00:03 crc kubenswrapper[4754]: I0218 20:00:03.211722 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/edb684bb-9943-4cd5-a4d4-208ee65d7adc-config-volume" (OuterVolumeSpecName: "config-volume") pod "edb684bb-9943-4cd5-a4d4-208ee65d7adc" (UID: "edb684bb-9943-4cd5-a4d4-208ee65d7adc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:00:03 crc kubenswrapper[4754]: I0218 20:00:03.216353 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edb684bb-9943-4cd5-a4d4-208ee65d7adc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "edb684bb-9943-4cd5-a4d4-208ee65d7adc" (UID: "edb684bb-9943-4cd5-a4d4-208ee65d7adc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:03 crc kubenswrapper[4754]: I0218 20:00:03.216391 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edb684bb-9943-4cd5-a4d4-208ee65d7adc-kube-api-access-5txnx" (OuterVolumeSpecName: "kube-api-access-5txnx") pod "edb684bb-9943-4cd5-a4d4-208ee65d7adc" (UID: "edb684bb-9943-4cd5-a4d4-208ee65d7adc"). InnerVolumeSpecName "kube-api-access-5txnx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:00:03 crc kubenswrapper[4754]: I0218 20:00:03.313231 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5txnx\" (UniqueName: \"kubernetes.io/projected/edb684bb-9943-4cd5-a4d4-208ee65d7adc-kube-api-access-5txnx\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:03 crc kubenswrapper[4754]: I0218 20:00:03.313271 4754 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/edb684bb-9943-4cd5-a4d4-208ee65d7adc-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:03 crc kubenswrapper[4754]: I0218 20:00:03.313287 4754 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/edb684bb-9943-4cd5-a4d4-208ee65d7adc-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:03 crc kubenswrapper[4754]: I0218 20:00:03.772289 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz" event={"ID":"edb684bb-9943-4cd5-a4d4-208ee65d7adc","Type":"ContainerDied","Data":"19bda1c9fabf652950263949b4ff7d754068efde3e2e813734b21507f3c61857"} Feb 18 20:00:03 crc kubenswrapper[4754]: I0218 20:00:03.772360 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19bda1c9fabf652950263949b4ff7d754068efde3e2e813734b21507f3c61857" Feb 18 20:00:03 crc kubenswrapper[4754]: I0218 20:00:03.772641 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz" Feb 18 20:00:04 crc kubenswrapper[4754]: I0218 20:00:04.224310 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s"] Feb 18 20:00:04 crc kubenswrapper[4754]: I0218 20:00:04.232701 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524035-8jz7s"] Feb 18 20:00:06 crc kubenswrapper[4754]: I0218 20:00:06.222039 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e" path="/var/lib/kubelet/pods/9f0a6bb4-edb7-46c5-9ce4-ab6e2475da3e/volumes" Feb 18 20:00:37 crc kubenswrapper[4754]: I0218 20:00:37.114091 4754 scope.go:117] "RemoveContainer" containerID="5cac2b295275013f3b398eef6cd73b1ab1039509a00e0afdf73a0d036e1d16da" Feb 18 20:00:37 crc kubenswrapper[4754]: I0218 20:00:37.144625 4754 scope.go:117] "RemoveContainer" containerID="fc38805242315d075b5afdd7d4a7a89358bab58eeeb6c985001d65a47f59ee09" Feb 18 20:00:37 crc kubenswrapper[4754]: I0218 20:00:37.165574 4754 scope.go:117] "RemoveContainer" containerID="229d35e3f0b5b7fc8f5f79e919807c8ffb8676cd073ac321bef9c8bea8700569" Feb 18 20:00:37 crc kubenswrapper[4754]: I0218 20:00:37.268770 4754 scope.go:117] "RemoveContainer" containerID="7fb5a1fd787c257b311aff49969d111e7c8e6d8f9bfa02e23af913e0552ddeed" Feb 18 20:00:45 crc kubenswrapper[4754]: I0218 20:00:45.189208 4754 generic.go:334] "Generic (PLEG): container finished" podID="73dcedb8-7dbd-4d35-99d1-f11a2b7745be" containerID="187b2e3539392a8fcaf4610a0c685ae4604878c52125e7b67d9ed78a0d5cc5fe" exitCode=0 Feb 18 20:00:45 crc kubenswrapper[4754]: I0218 20:00:45.189285 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" 
event={"ID":"73dcedb8-7dbd-4d35-99d1-f11a2b7745be","Type":"ContainerDied","Data":"187b2e3539392a8fcaf4610a0c685ae4604878c52125e7b67d9ed78a0d5cc5fe"} Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.167726 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.208114 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" event={"ID":"73dcedb8-7dbd-4d35-99d1-f11a2b7745be","Type":"ContainerDied","Data":"77aa18f82332353b0cf3ebdc39ddf5eb11815ffe29943f9d5d25825c46a1fbcb"} Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.208520 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77aa18f82332353b0cf3ebdc39ddf5eb11815ffe29943f9d5d25825c46a1fbcb" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.208208 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-k5pc8" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.247830 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-1\") pod \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.247922 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-migration-ssh-key-1\") pod \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.247950 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-ssh-key-openstack-edpm-ipam\") pod \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.248015 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-3\") pod \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.248031 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-2\") pod \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " Feb 18 20:00:47 crc 
kubenswrapper[4754]: I0218 20:00:47.248131 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzwfx\" (UniqueName: \"kubernetes.io/projected/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-kube-api-access-kzwfx\") pod \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.249439 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-0\") pod \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.249688 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-combined-ca-bundle\") pod \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.249771 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-inventory\") pod \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.249890 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-migration-ssh-key-0\") pod \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.250112 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-extra-config-0\") pod \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\" (UID: \"73dcedb8-7dbd-4d35-99d1-f11a2b7745be\") " Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.288668 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "73dcedb8-7dbd-4d35-99d1-f11a2b7745be" (UID: "73dcedb8-7dbd-4d35-99d1-f11a2b7745be"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.295403 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-kube-api-access-kzwfx" (OuterVolumeSpecName: "kube-api-access-kzwfx") pod "73dcedb8-7dbd-4d35-99d1-f11a2b7745be" (UID: "73dcedb8-7dbd-4d35-99d1-f11a2b7745be"). InnerVolumeSpecName "kube-api-access-kzwfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.310600 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "73dcedb8-7dbd-4d35-99d1-f11a2b7745be" (UID: "73dcedb8-7dbd-4d35-99d1-f11a2b7745be"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.310666 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "73dcedb8-7dbd-4d35-99d1-f11a2b7745be" (UID: "73dcedb8-7dbd-4d35-99d1-f11a2b7745be"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.324276 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "73dcedb8-7dbd-4d35-99d1-f11a2b7745be" (UID: "73dcedb8-7dbd-4d35-99d1-f11a2b7745be"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.352669 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "73dcedb8-7dbd-4d35-99d1-f11a2b7745be" (UID: "73dcedb8-7dbd-4d35-99d1-f11a2b7745be"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.354118 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "73dcedb8-7dbd-4d35-99d1-f11a2b7745be" (UID: "73dcedb8-7dbd-4d35-99d1-f11a2b7745be"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.356290 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "73dcedb8-7dbd-4d35-99d1-f11a2b7745be" (UID: "73dcedb8-7dbd-4d35-99d1-f11a2b7745be"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.361761 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p"] Feb 18 20:00:47 crc kubenswrapper[4754]: E0218 20:00:47.362267 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73dcedb8-7dbd-4d35-99d1-f11a2b7745be" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.362281 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="73dcedb8-7dbd-4d35-99d1-f11a2b7745be" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 20:00:47 crc kubenswrapper[4754]: E0218 20:00:47.362298 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edb684bb-9943-4cd5-a4d4-208ee65d7adc" containerName="collect-profiles" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.362306 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="edb684bb-9943-4cd5-a4d4-208ee65d7adc" containerName="collect-profiles" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.362538 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="edb684bb-9943-4cd5-a4d4-208ee65d7adc" containerName="collect-profiles" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.362570 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="73dcedb8-7dbd-4d35-99d1-f11a2b7745be" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.363446 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.367166 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.370897 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p"] Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.371122 4754 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.371154 4754 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.371163 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.371171 4754 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.371179 4754 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.371188 4754 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-kzwfx\" (UniqueName: \"kubernetes.io/projected/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-kube-api-access-kzwfx\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.371196 4754 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.371203 4754 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.407159 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "73dcedb8-7dbd-4d35-99d1-f11a2b7745be" (UID: "73dcedb8-7dbd-4d35-99d1-f11a2b7745be"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.408917 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "73dcedb8-7dbd-4d35-99d1-f11a2b7745be" (UID: "73dcedb8-7dbd-4d35-99d1-f11a2b7745be"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.409931 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-inventory" (OuterVolumeSpecName: "inventory") pod "73dcedb8-7dbd-4d35-99d1-f11a2b7745be" (UID: "73dcedb8-7dbd-4d35-99d1-f11a2b7745be"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.472729 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.472785 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.472815 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvthr\" (UniqueName: \"kubernetes.io/projected/be6b95b5-fca8-4071-a6be-3862bb055f7d-kube-api-access-lvthr\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.473017 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.473108 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.473221 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.473395 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.473540 4754 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.473557 4754 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.473570 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73dcedb8-7dbd-4d35-99d1-f11a2b7745be-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.575447 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.575562 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.575701 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.575793 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.575842 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.575883 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvthr\" (UniqueName: \"kubernetes.io/projected/be6b95b5-fca8-4071-a6be-3862bb055f7d-kube-api-access-lvthr\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.575967 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.579666 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.579946 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.580735 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.581037 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.581785 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.581795 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.592111 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvthr\" (UniqueName: \"kubernetes.io/projected/be6b95b5-fca8-4071-a6be-3862bb055f7d-kube-api-access-lvthr\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:47 crc kubenswrapper[4754]: I0218 20:00:47.791562 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:00:48 crc kubenswrapper[4754]: I0218 20:00:48.342525 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p"] Feb 18 20:00:49 crc kubenswrapper[4754]: I0218 20:00:49.226184 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" event={"ID":"be6b95b5-fca8-4071-a6be-3862bb055f7d","Type":"ContainerStarted","Data":"8c4151c14741d020f1d5f7ed7dc92cc68a459e66a32c85cd1bcabe63f1784c36"} Feb 18 20:00:50 crc kubenswrapper[4754]: I0218 20:00:50.244031 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" event={"ID":"be6b95b5-fca8-4071-a6be-3862bb055f7d","Type":"ContainerStarted","Data":"594f3c687c4cde79c4bab13076db1e12a5113dca31dbfe4b3db72d4d3f9f0efb"} Feb 18 20:00:50 crc kubenswrapper[4754]: I0218 20:00:50.274404 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" podStartSLOduration=2.61907776 podStartE2EDuration="3.274384097s" podCreationTimestamp="2026-02-18 20:00:47 +0000 UTC" firstStartedPulling="2026-02-18 20:00:48.347894151 +0000 UTC m=+2550.798306947" lastFinishedPulling="2026-02-18 20:00:49.003200448 +0000 UTC m=+2551.453613284" observedRunningTime="2026-02-18 20:00:50.26963443 +0000 UTC m=+2552.720047226" watchObservedRunningTime="2026-02-18 20:00:50.274384097 +0000 UTC m=+2552.724796893" Feb 18 20:01:00 crc kubenswrapper[4754]: I0218 20:01:00.135227 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29524081-8jxlf"] Feb 18 20:01:00 crc kubenswrapper[4754]: I0218 20:01:00.137080 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524081-8jxlf" Feb 18 20:01:00 crc kubenswrapper[4754]: I0218 20:01:00.189955 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524081-8jxlf"] Feb 18 20:01:00 crc kubenswrapper[4754]: I0218 20:01:00.237567 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b451f43-d3b8-48c0-a700-9e9df06a28cd-config-data\") pod \"keystone-cron-29524081-8jxlf\" (UID: \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\") " pod="openstack/keystone-cron-29524081-8jxlf" Feb 18 20:01:00 crc kubenswrapper[4754]: I0218 20:01:00.237655 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghbrf\" (UniqueName: \"kubernetes.io/projected/0b451f43-d3b8-48c0-a700-9e9df06a28cd-kube-api-access-ghbrf\") pod \"keystone-cron-29524081-8jxlf\" (UID: \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\") " pod="openstack/keystone-cron-29524081-8jxlf" Feb 18 20:01:00 crc kubenswrapper[4754]: I0218 20:01:00.237713 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b451f43-d3b8-48c0-a700-9e9df06a28cd-combined-ca-bundle\") pod \"keystone-cron-29524081-8jxlf\" (UID: \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\") " pod="openstack/keystone-cron-29524081-8jxlf" Feb 18 20:01:00 crc kubenswrapper[4754]: I0218 20:01:00.237871 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b451f43-d3b8-48c0-a700-9e9df06a28cd-fernet-keys\") pod \"keystone-cron-29524081-8jxlf\" (UID: \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\") " pod="openstack/keystone-cron-29524081-8jxlf" Feb 18 20:01:00 crc kubenswrapper[4754]: I0218 20:01:00.340440 4754 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b451f43-d3b8-48c0-a700-9e9df06a28cd-config-data\") pod \"keystone-cron-29524081-8jxlf\" (UID: \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\") " pod="openstack/keystone-cron-29524081-8jxlf" Feb 18 20:01:00 crc kubenswrapper[4754]: I0218 20:01:00.340975 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghbrf\" (UniqueName: \"kubernetes.io/projected/0b451f43-d3b8-48c0-a700-9e9df06a28cd-kube-api-access-ghbrf\") pod \"keystone-cron-29524081-8jxlf\" (UID: \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\") " pod="openstack/keystone-cron-29524081-8jxlf" Feb 18 20:01:00 crc kubenswrapper[4754]: I0218 20:01:00.341041 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b451f43-d3b8-48c0-a700-9e9df06a28cd-combined-ca-bundle\") pod \"keystone-cron-29524081-8jxlf\" (UID: \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\") " pod="openstack/keystone-cron-29524081-8jxlf" Feb 18 20:01:00 crc kubenswrapper[4754]: I0218 20:01:00.341262 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b451f43-d3b8-48c0-a700-9e9df06a28cd-fernet-keys\") pod \"keystone-cron-29524081-8jxlf\" (UID: \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\") " pod="openstack/keystone-cron-29524081-8jxlf" Feb 18 20:01:00 crc kubenswrapper[4754]: I0218 20:01:00.347783 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b451f43-d3b8-48c0-a700-9e9df06a28cd-fernet-keys\") pod \"keystone-cron-29524081-8jxlf\" (UID: \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\") " pod="openstack/keystone-cron-29524081-8jxlf" Feb 18 20:01:00 crc kubenswrapper[4754]: I0218 20:01:00.348550 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0b451f43-d3b8-48c0-a700-9e9df06a28cd-combined-ca-bundle\") pod \"keystone-cron-29524081-8jxlf\" (UID: \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\") " pod="openstack/keystone-cron-29524081-8jxlf" Feb 18 20:01:00 crc kubenswrapper[4754]: I0218 20:01:00.349259 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b451f43-d3b8-48c0-a700-9e9df06a28cd-config-data\") pod \"keystone-cron-29524081-8jxlf\" (UID: \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\") " pod="openstack/keystone-cron-29524081-8jxlf" Feb 18 20:01:00 crc kubenswrapper[4754]: I0218 20:01:00.363930 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghbrf\" (UniqueName: \"kubernetes.io/projected/0b451f43-d3b8-48c0-a700-9e9df06a28cd-kube-api-access-ghbrf\") pod \"keystone-cron-29524081-8jxlf\" (UID: \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\") " pod="openstack/keystone-cron-29524081-8jxlf" Feb 18 20:01:00 crc kubenswrapper[4754]: I0218 20:01:00.484287 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524081-8jxlf" Feb 18 20:01:00 crc kubenswrapper[4754]: I0218 20:01:00.937081 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524081-8jxlf"] Feb 18 20:01:01 crc kubenswrapper[4754]: I0218 20:01:01.346126 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-8jxlf" event={"ID":"0b451f43-d3b8-48c0-a700-9e9df06a28cd","Type":"ContainerStarted","Data":"7fe57bae3aefaf54ec208d6ff827acf81de5f02fed707bde28f8788d5b6d28ba"} Feb 18 20:01:01 crc kubenswrapper[4754]: I0218 20:01:01.346470 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-8jxlf" event={"ID":"0b451f43-d3b8-48c0-a700-9e9df06a28cd","Type":"ContainerStarted","Data":"4a8161e6d9d6b4aa8cd42e5a37b252d3a5b001070d841f7dbc289a5cc8f55e3a"} Feb 18 20:01:01 crc kubenswrapper[4754]: I0218 20:01:01.370240 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29524081-8jxlf" podStartSLOduration=1.370218634 podStartE2EDuration="1.370218634s" podCreationTimestamp="2026-02-18 20:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:01:01.361771371 +0000 UTC m=+2563.812184177" watchObservedRunningTime="2026-02-18 20:01:01.370218634 +0000 UTC m=+2563.820631430" Feb 18 20:01:03 crc kubenswrapper[4754]: I0218 20:01:03.363775 4754 generic.go:334] "Generic (PLEG): container finished" podID="0b451f43-d3b8-48c0-a700-9e9df06a28cd" containerID="7fe57bae3aefaf54ec208d6ff827acf81de5f02fed707bde28f8788d5b6d28ba" exitCode=0 Feb 18 20:01:03 crc kubenswrapper[4754]: I0218 20:01:03.363874 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-8jxlf" 
event={"ID":"0b451f43-d3b8-48c0-a700-9e9df06a28cd","Type":"ContainerDied","Data":"7fe57bae3aefaf54ec208d6ff827acf81de5f02fed707bde28f8788d5b6d28ba"} Feb 18 20:01:04 crc kubenswrapper[4754]: I0218 20:01:04.766374 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524081-8jxlf" Feb 18 20:01:04 crc kubenswrapper[4754]: I0218 20:01:04.835413 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghbrf\" (UniqueName: \"kubernetes.io/projected/0b451f43-d3b8-48c0-a700-9e9df06a28cd-kube-api-access-ghbrf\") pod \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\" (UID: \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\") " Feb 18 20:01:04 crc kubenswrapper[4754]: I0218 20:01:04.835601 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b451f43-d3b8-48c0-a700-9e9df06a28cd-combined-ca-bundle\") pod \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\" (UID: \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\") " Feb 18 20:01:04 crc kubenswrapper[4754]: I0218 20:01:04.835639 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b451f43-d3b8-48c0-a700-9e9df06a28cd-config-data\") pod \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\" (UID: \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\") " Feb 18 20:01:04 crc kubenswrapper[4754]: I0218 20:01:04.835730 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b451f43-d3b8-48c0-a700-9e9df06a28cd-fernet-keys\") pod \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\" (UID: \"0b451f43-d3b8-48c0-a700-9e9df06a28cd\") " Feb 18 20:01:04 crc kubenswrapper[4754]: I0218 20:01:04.841040 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b451f43-d3b8-48c0-a700-9e9df06a28cd-kube-api-access-ghbrf" 
(OuterVolumeSpecName: "kube-api-access-ghbrf") pod "0b451f43-d3b8-48c0-a700-9e9df06a28cd" (UID: "0b451f43-d3b8-48c0-a700-9e9df06a28cd"). InnerVolumeSpecName "kube-api-access-ghbrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:01:04 crc kubenswrapper[4754]: I0218 20:01:04.842854 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b451f43-d3b8-48c0-a700-9e9df06a28cd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0b451f43-d3b8-48c0-a700-9e9df06a28cd" (UID: "0b451f43-d3b8-48c0-a700-9e9df06a28cd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:04 crc kubenswrapper[4754]: I0218 20:01:04.862485 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b451f43-d3b8-48c0-a700-9e9df06a28cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b451f43-d3b8-48c0-a700-9e9df06a28cd" (UID: "0b451f43-d3b8-48c0-a700-9e9df06a28cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:04 crc kubenswrapper[4754]: I0218 20:01:04.894682 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b451f43-d3b8-48c0-a700-9e9df06a28cd-config-data" (OuterVolumeSpecName: "config-data") pod "0b451f43-d3b8-48c0-a700-9e9df06a28cd" (UID: "0b451f43-d3b8-48c0-a700-9e9df06a28cd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:04 crc kubenswrapper[4754]: I0218 20:01:04.937917 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghbrf\" (UniqueName: \"kubernetes.io/projected/0b451f43-d3b8-48c0-a700-9e9df06a28cd-kube-api-access-ghbrf\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:04 crc kubenswrapper[4754]: I0218 20:01:04.937954 4754 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b451f43-d3b8-48c0-a700-9e9df06a28cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:04 crc kubenswrapper[4754]: I0218 20:01:04.937964 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b451f43-d3b8-48c0-a700-9e9df06a28cd-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:04 crc kubenswrapper[4754]: I0218 20:01:04.937972 4754 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b451f43-d3b8-48c0-a700-9e9df06a28cd-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:05 crc kubenswrapper[4754]: I0218 20:01:05.385634 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-8jxlf" event={"ID":"0b451f43-d3b8-48c0-a700-9e9df06a28cd","Type":"ContainerDied","Data":"4a8161e6d9d6b4aa8cd42e5a37b252d3a5b001070d841f7dbc289a5cc8f55e3a"} Feb 18 20:01:05 crc kubenswrapper[4754]: I0218 20:01:05.385667 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524081-8jxlf" Feb 18 20:01:05 crc kubenswrapper[4754]: I0218 20:01:05.385683 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a8161e6d9d6b4aa8cd42e5a37b252d3a5b001070d841f7dbc289a5cc8f55e3a" Feb 18 20:01:32 crc kubenswrapper[4754]: I0218 20:01:32.695615 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d5b9v"] Feb 18 20:01:32 crc kubenswrapper[4754]: E0218 20:01:32.696614 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b451f43-d3b8-48c0-a700-9e9df06a28cd" containerName="keystone-cron" Feb 18 20:01:32 crc kubenswrapper[4754]: I0218 20:01:32.696631 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b451f43-d3b8-48c0-a700-9e9df06a28cd" containerName="keystone-cron" Feb 18 20:01:32 crc kubenswrapper[4754]: I0218 20:01:32.696885 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b451f43-d3b8-48c0-a700-9e9df06a28cd" containerName="keystone-cron" Feb 18 20:01:32 crc kubenswrapper[4754]: I0218 20:01:32.698339 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d5b9v" Feb 18 20:01:32 crc kubenswrapper[4754]: I0218 20:01:32.714724 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d5b9v"] Feb 18 20:01:32 crc kubenswrapper[4754]: I0218 20:01:32.783759 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hqlv\" (UniqueName: \"kubernetes.io/projected/e66694fa-7a75-4d81-9b7f-8d342fdef965-kube-api-access-7hqlv\") pod \"certified-operators-d5b9v\" (UID: \"e66694fa-7a75-4d81-9b7f-8d342fdef965\") " pod="openshift-marketplace/certified-operators-d5b9v" Feb 18 20:01:32 crc kubenswrapper[4754]: I0218 20:01:32.783844 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e66694fa-7a75-4d81-9b7f-8d342fdef965-catalog-content\") pod \"certified-operators-d5b9v\" (UID: \"e66694fa-7a75-4d81-9b7f-8d342fdef965\") " pod="openshift-marketplace/certified-operators-d5b9v" Feb 18 20:01:32 crc kubenswrapper[4754]: I0218 20:01:32.783969 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e66694fa-7a75-4d81-9b7f-8d342fdef965-utilities\") pod \"certified-operators-d5b9v\" (UID: \"e66694fa-7a75-4d81-9b7f-8d342fdef965\") " pod="openshift-marketplace/certified-operators-d5b9v" Feb 18 20:01:32 crc kubenswrapper[4754]: I0218 20:01:32.885209 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e66694fa-7a75-4d81-9b7f-8d342fdef965-utilities\") pod \"certified-operators-d5b9v\" (UID: \"e66694fa-7a75-4d81-9b7f-8d342fdef965\") " pod="openshift-marketplace/certified-operators-d5b9v" Feb 18 20:01:32 crc kubenswrapper[4754]: I0218 20:01:32.885363 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7hqlv\" (UniqueName: \"kubernetes.io/projected/e66694fa-7a75-4d81-9b7f-8d342fdef965-kube-api-access-7hqlv\") pod \"certified-operators-d5b9v\" (UID: \"e66694fa-7a75-4d81-9b7f-8d342fdef965\") " pod="openshift-marketplace/certified-operators-d5b9v" Feb 18 20:01:32 crc kubenswrapper[4754]: I0218 20:01:32.885425 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e66694fa-7a75-4d81-9b7f-8d342fdef965-catalog-content\") pod \"certified-operators-d5b9v\" (UID: \"e66694fa-7a75-4d81-9b7f-8d342fdef965\") " pod="openshift-marketplace/certified-operators-d5b9v" Feb 18 20:01:32 crc kubenswrapper[4754]: I0218 20:01:32.885726 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e66694fa-7a75-4d81-9b7f-8d342fdef965-utilities\") pod \"certified-operators-d5b9v\" (UID: \"e66694fa-7a75-4d81-9b7f-8d342fdef965\") " pod="openshift-marketplace/certified-operators-d5b9v" Feb 18 20:01:32 crc kubenswrapper[4754]: I0218 20:01:32.885977 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e66694fa-7a75-4d81-9b7f-8d342fdef965-catalog-content\") pod \"certified-operators-d5b9v\" (UID: \"e66694fa-7a75-4d81-9b7f-8d342fdef965\") " pod="openshift-marketplace/certified-operators-d5b9v" Feb 18 20:01:32 crc kubenswrapper[4754]: I0218 20:01:32.905079 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hqlv\" (UniqueName: \"kubernetes.io/projected/e66694fa-7a75-4d81-9b7f-8d342fdef965-kube-api-access-7hqlv\") pod \"certified-operators-d5b9v\" (UID: \"e66694fa-7a75-4d81-9b7f-8d342fdef965\") " pod="openshift-marketplace/certified-operators-d5b9v" Feb 18 20:01:33 crc kubenswrapper[4754]: I0218 20:01:33.034331 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d5b9v" Feb 18 20:01:33 crc kubenswrapper[4754]: I0218 20:01:33.519600 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d5b9v"] Feb 18 20:01:33 crc kubenswrapper[4754]: I0218 20:01:33.667706 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5b9v" event={"ID":"e66694fa-7a75-4d81-9b7f-8d342fdef965","Type":"ContainerStarted","Data":"68421da0875899f7574f3777c2b6b9a471357e79dc805f180ed05f70ca7c9feb"} Feb 18 20:01:34 crc kubenswrapper[4754]: I0218 20:01:34.683346 4754 generic.go:334] "Generic (PLEG): container finished" podID="e66694fa-7a75-4d81-9b7f-8d342fdef965" containerID="893b8d0b6824045f4b86b1bddadd1ea9791d4af6a72177e1b5cfbcb53eb10d64" exitCode=0 Feb 18 20:01:34 crc kubenswrapper[4754]: I0218 20:01:34.683582 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5b9v" event={"ID":"e66694fa-7a75-4d81-9b7f-8d342fdef965","Type":"ContainerDied","Data":"893b8d0b6824045f4b86b1bddadd1ea9791d4af6a72177e1b5cfbcb53eb10d64"} Feb 18 20:01:35 crc kubenswrapper[4754]: I0218 20:01:35.700227 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5b9v" event={"ID":"e66694fa-7a75-4d81-9b7f-8d342fdef965","Type":"ContainerStarted","Data":"d3d6e1dc2e3dc006d4bb49dfd7a36e0cac6ee1dabf069584c0cb07f1824700ba"} Feb 18 20:01:36 crc kubenswrapper[4754]: I0218 20:01:36.716275 4754 generic.go:334] "Generic (PLEG): container finished" podID="e66694fa-7a75-4d81-9b7f-8d342fdef965" containerID="d3d6e1dc2e3dc006d4bb49dfd7a36e0cac6ee1dabf069584c0cb07f1824700ba" exitCode=0 Feb 18 20:01:36 crc kubenswrapper[4754]: I0218 20:01:36.716395 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5b9v" 
event={"ID":"e66694fa-7a75-4d81-9b7f-8d342fdef965","Type":"ContainerDied","Data":"d3d6e1dc2e3dc006d4bb49dfd7a36e0cac6ee1dabf069584c0cb07f1824700ba"} Feb 18 20:01:37 crc kubenswrapper[4754]: I0218 20:01:37.730115 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5b9v" event={"ID":"e66694fa-7a75-4d81-9b7f-8d342fdef965","Type":"ContainerStarted","Data":"42d3fce528cbb9c10a80d3c373a42b3bd5f445c6b55c94984ffdbdd77e93ce4e"} Feb 18 20:01:37 crc kubenswrapper[4754]: I0218 20:01:37.755686 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d5b9v" podStartSLOduration=3.102496653 podStartE2EDuration="5.755667526s" podCreationTimestamp="2026-02-18 20:01:32 +0000 UTC" firstStartedPulling="2026-02-18 20:01:34.686916107 +0000 UTC m=+2597.137328913" lastFinishedPulling="2026-02-18 20:01:37.34008697 +0000 UTC m=+2599.790499786" observedRunningTime="2026-02-18 20:01:37.749844835 +0000 UTC m=+2600.200257681" watchObservedRunningTime="2026-02-18 20:01:37.755667526 +0000 UTC m=+2600.206080332" Feb 18 20:01:43 crc kubenswrapper[4754]: I0218 20:01:43.034762 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d5b9v" Feb 18 20:01:43 crc kubenswrapper[4754]: I0218 20:01:43.035394 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d5b9v" Feb 18 20:01:43 crc kubenswrapper[4754]: I0218 20:01:43.108370 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d5b9v" Feb 18 20:01:43 crc kubenswrapper[4754]: I0218 20:01:43.848752 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d5b9v" Feb 18 20:01:43 crc kubenswrapper[4754]: I0218 20:01:43.898297 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-d5b9v"] Feb 18 20:01:45 crc kubenswrapper[4754]: I0218 20:01:45.818174 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d5b9v" podUID="e66694fa-7a75-4d81-9b7f-8d342fdef965" containerName="registry-server" containerID="cri-o://42d3fce528cbb9c10a80d3c373a42b3bd5f445c6b55c94984ffdbdd77e93ce4e" gracePeriod=2 Feb 18 20:01:46 crc kubenswrapper[4754]: I0218 20:01:46.828359 4754 generic.go:334] "Generic (PLEG): container finished" podID="e66694fa-7a75-4d81-9b7f-8d342fdef965" containerID="42d3fce528cbb9c10a80d3c373a42b3bd5f445c6b55c94984ffdbdd77e93ce4e" exitCode=0 Feb 18 20:01:46 crc kubenswrapper[4754]: I0218 20:01:46.828425 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5b9v" event={"ID":"e66694fa-7a75-4d81-9b7f-8d342fdef965","Type":"ContainerDied","Data":"42d3fce528cbb9c10a80d3c373a42b3bd5f445c6b55c94984ffdbdd77e93ce4e"} Feb 18 20:01:47 crc kubenswrapper[4754]: I0218 20:01:47.384938 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d5b9v" Feb 18 20:01:47 crc kubenswrapper[4754]: I0218 20:01:47.555015 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hqlv\" (UniqueName: \"kubernetes.io/projected/e66694fa-7a75-4d81-9b7f-8d342fdef965-kube-api-access-7hqlv\") pod \"e66694fa-7a75-4d81-9b7f-8d342fdef965\" (UID: \"e66694fa-7a75-4d81-9b7f-8d342fdef965\") " Feb 18 20:01:47 crc kubenswrapper[4754]: I0218 20:01:47.556301 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e66694fa-7a75-4d81-9b7f-8d342fdef965-utilities\") pod \"e66694fa-7a75-4d81-9b7f-8d342fdef965\" (UID: \"e66694fa-7a75-4d81-9b7f-8d342fdef965\") " Feb 18 20:01:47 crc kubenswrapper[4754]: I0218 20:01:47.556423 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e66694fa-7a75-4d81-9b7f-8d342fdef965-catalog-content\") pod \"e66694fa-7a75-4d81-9b7f-8d342fdef965\" (UID: \"e66694fa-7a75-4d81-9b7f-8d342fdef965\") " Feb 18 20:01:47 crc kubenswrapper[4754]: I0218 20:01:47.557970 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e66694fa-7a75-4d81-9b7f-8d342fdef965-utilities" (OuterVolumeSpecName: "utilities") pod "e66694fa-7a75-4d81-9b7f-8d342fdef965" (UID: "e66694fa-7a75-4d81-9b7f-8d342fdef965"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:01:47 crc kubenswrapper[4754]: I0218 20:01:47.560259 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e66694fa-7a75-4d81-9b7f-8d342fdef965-kube-api-access-7hqlv" (OuterVolumeSpecName: "kube-api-access-7hqlv") pod "e66694fa-7a75-4d81-9b7f-8d342fdef965" (UID: "e66694fa-7a75-4d81-9b7f-8d342fdef965"). InnerVolumeSpecName "kube-api-access-7hqlv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:01:47 crc kubenswrapper[4754]: I0218 20:01:47.610336 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e66694fa-7a75-4d81-9b7f-8d342fdef965-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e66694fa-7a75-4d81-9b7f-8d342fdef965" (UID: "e66694fa-7a75-4d81-9b7f-8d342fdef965"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:01:47 crc kubenswrapper[4754]: I0218 20:01:47.659162 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e66694fa-7a75-4d81-9b7f-8d342fdef965-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:47 crc kubenswrapper[4754]: I0218 20:01:47.659198 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hqlv\" (UniqueName: \"kubernetes.io/projected/e66694fa-7a75-4d81-9b7f-8d342fdef965-kube-api-access-7hqlv\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:47 crc kubenswrapper[4754]: I0218 20:01:47.659214 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e66694fa-7a75-4d81-9b7f-8d342fdef965-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:47 crc kubenswrapper[4754]: I0218 20:01:47.841963 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d5b9v" event={"ID":"e66694fa-7a75-4d81-9b7f-8d342fdef965","Type":"ContainerDied","Data":"68421da0875899f7574f3777c2b6b9a471357e79dc805f180ed05f70ca7c9feb"} Feb 18 20:01:47 crc kubenswrapper[4754]: I0218 20:01:47.842025 4754 scope.go:117] "RemoveContainer" containerID="42d3fce528cbb9c10a80d3c373a42b3bd5f445c6b55c94984ffdbdd77e93ce4e" Feb 18 20:01:47 crc kubenswrapper[4754]: I0218 20:01:47.842072 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d5b9v" Feb 18 20:01:47 crc kubenswrapper[4754]: I0218 20:01:47.899542 4754 scope.go:117] "RemoveContainer" containerID="d3d6e1dc2e3dc006d4bb49dfd7a36e0cac6ee1dabf069584c0cb07f1824700ba" Feb 18 20:01:47 crc kubenswrapper[4754]: I0218 20:01:47.911076 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d5b9v"] Feb 18 20:01:47 crc kubenswrapper[4754]: I0218 20:01:47.924352 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d5b9v"] Feb 18 20:01:47 crc kubenswrapper[4754]: I0218 20:01:47.936702 4754 scope.go:117] "RemoveContainer" containerID="893b8d0b6824045f4b86b1bddadd1ea9791d4af6a72177e1b5cfbcb53eb10d64" Feb 18 20:01:48 crc kubenswrapper[4754]: I0218 20:01:48.231475 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e66694fa-7a75-4d81-9b7f-8d342fdef965" path="/var/lib/kubelet/pods/e66694fa-7a75-4d81-9b7f-8d342fdef965/volumes" Feb 18 20:02:08 crc kubenswrapper[4754]: I0218 20:02:08.096937 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:02:08 crc kubenswrapper[4754]: I0218 20:02:08.097566 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:02:38 crc kubenswrapper[4754]: I0218 20:02:38.096340 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:02:38 crc kubenswrapper[4754]: I0218 20:02:38.096889 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:02:47 crc kubenswrapper[4754]: I0218 20:02:47.526033 4754 generic.go:334] "Generic (PLEG): container finished" podID="be6b95b5-fca8-4071-a6be-3862bb055f7d" containerID="594f3c687c4cde79c4bab13076db1e12a5113dca31dbfe4b3db72d4d3f9f0efb" exitCode=0 Feb 18 20:02:47 crc kubenswrapper[4754]: I0218 20:02:47.526163 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" event={"ID":"be6b95b5-fca8-4071-a6be-3862bb055f7d","Type":"ContainerDied","Data":"594f3c687c4cde79c4bab13076db1e12a5113dca31dbfe4b3db72d4d3f9f0efb"} Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.000575 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.134695 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ceilometer-compute-config-data-2\") pod \"be6b95b5-fca8-4071-a6be-3862bb055f7d\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.135083 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ceilometer-compute-config-data-1\") pod \"be6b95b5-fca8-4071-a6be-3862bb055f7d\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.135213 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ceilometer-compute-config-data-0\") pod \"be6b95b5-fca8-4071-a6be-3862bb055f7d\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.135341 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-telemetry-combined-ca-bundle\") pod \"be6b95b5-fca8-4071-a6be-3862bb055f7d\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.135492 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-inventory\") pod \"be6b95b5-fca8-4071-a6be-3862bb055f7d\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " Feb 18 20:02:49 crc 
kubenswrapper[4754]: I0218 20:02:49.135608 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvthr\" (UniqueName: \"kubernetes.io/projected/be6b95b5-fca8-4071-a6be-3862bb055f7d-kube-api-access-lvthr\") pod \"be6b95b5-fca8-4071-a6be-3862bb055f7d\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.135664 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ssh-key-openstack-edpm-ipam\") pod \"be6b95b5-fca8-4071-a6be-3862bb055f7d\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.142126 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "be6b95b5-fca8-4071-a6be-3862bb055f7d" (UID: "be6b95b5-fca8-4071-a6be-3862bb055f7d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.142598 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be6b95b5-fca8-4071-a6be-3862bb055f7d-kube-api-access-lvthr" (OuterVolumeSpecName: "kube-api-access-lvthr") pod "be6b95b5-fca8-4071-a6be-3862bb055f7d" (UID: "be6b95b5-fca8-4071-a6be-3862bb055f7d"). InnerVolumeSpecName "kube-api-access-lvthr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.169129 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "be6b95b5-fca8-4071-a6be-3862bb055f7d" (UID: "be6b95b5-fca8-4071-a6be-3862bb055f7d"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.189216 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "be6b95b5-fca8-4071-a6be-3862bb055f7d" (UID: "be6b95b5-fca8-4071-a6be-3862bb055f7d"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:02:49 crc kubenswrapper[4754]: E0218 20:02:49.189758 4754 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-inventory podName:be6b95b5-fca8-4071-a6be-3862bb055f7d nodeName:}" failed. No retries permitted until 2026-02-18 20:02:49.689724216 +0000 UTC m=+2672.140137072 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-inventory") pod "be6b95b5-fca8-4071-a6be-3862bb055f7d" (UID: "be6b95b5-fca8-4071-a6be-3862bb055f7d") : error deleting /var/lib/kubelet/pods/be6b95b5-fca8-4071-a6be-3862bb055f7d/volume-subpaths: remove /var/lib/kubelet/pods/be6b95b5-fca8-4071-a6be-3862bb055f7d/volume-subpaths: no such file or directory Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.190000 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "be6b95b5-fca8-4071-a6be-3862bb055f7d" (UID: "be6b95b5-fca8-4071-a6be-3862bb055f7d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.194813 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "be6b95b5-fca8-4071-a6be-3862bb055f7d" (UID: "be6b95b5-fca8-4071-a6be-3862bb055f7d"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.241366 4754 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.241403 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvthr\" (UniqueName: \"kubernetes.io/projected/be6b95b5-fca8-4071-a6be-3862bb055f7d-kube-api-access-lvthr\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.241419 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.241431 4754 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.241444 4754 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.241458 4754 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.551694 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" event={"ID":"be6b95b5-fca8-4071-a6be-3862bb055f7d","Type":"ContainerDied","Data":"8c4151c14741d020f1d5f7ed7dc92cc68a459e66a32c85cd1bcabe63f1784c36"} Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.551744 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c4151c14741d020f1d5f7ed7dc92cc68a459e66a32c85cd1bcabe63f1784c36" Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.551805 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rnq2p" Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.749618 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-inventory\") pod \"be6b95b5-fca8-4071-a6be-3862bb055f7d\" (UID: \"be6b95b5-fca8-4071-a6be-3862bb055f7d\") " Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.752905 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-inventory" (OuterVolumeSpecName: "inventory") pod "be6b95b5-fca8-4071-a6be-3862bb055f7d" (UID: "be6b95b5-fca8-4071-a6be-3862bb055f7d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:02:49 crc kubenswrapper[4754]: I0218 20:02:49.851537 4754 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be6b95b5-fca8-4071-a6be-3862bb055f7d-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 20:03:04 crc kubenswrapper[4754]: I0218 20:03:04.585274 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tr94t"] Feb 18 20:03:04 crc kubenswrapper[4754]: E0218 20:03:04.586289 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e66694fa-7a75-4d81-9b7f-8d342fdef965" containerName="extract-utilities" Feb 18 20:03:04 crc kubenswrapper[4754]: I0218 20:03:04.586306 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e66694fa-7a75-4d81-9b7f-8d342fdef965" containerName="extract-utilities" Feb 18 20:03:04 crc kubenswrapper[4754]: E0218 20:03:04.586331 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e66694fa-7a75-4d81-9b7f-8d342fdef965" containerName="extract-content" Feb 18 20:03:04 crc kubenswrapper[4754]: I0218 20:03:04.586340 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e66694fa-7a75-4d81-9b7f-8d342fdef965" containerName="extract-content" Feb 18 20:03:04 crc kubenswrapper[4754]: E0218 20:03:04.586362 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e66694fa-7a75-4d81-9b7f-8d342fdef965" containerName="registry-server" Feb 18 20:03:04 crc kubenswrapper[4754]: I0218 20:03:04.586369 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="e66694fa-7a75-4d81-9b7f-8d342fdef965" containerName="registry-server" Feb 18 20:03:04 crc kubenswrapper[4754]: E0218 20:03:04.586378 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6b95b5-fca8-4071-a6be-3862bb055f7d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 20:03:04 crc kubenswrapper[4754]: I0218 20:03:04.586385 4754 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="be6b95b5-fca8-4071-a6be-3862bb055f7d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 20:03:04 crc kubenswrapper[4754]: I0218 20:03:04.586607 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="e66694fa-7a75-4d81-9b7f-8d342fdef965" containerName="registry-server" Feb 18 20:03:04 crc kubenswrapper[4754]: I0218 20:03:04.586623 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="be6b95b5-fca8-4071-a6be-3862bb055f7d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 20:03:04 crc kubenswrapper[4754]: I0218 20:03:04.588424 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tr94t" Feb 18 20:03:04 crc kubenswrapper[4754]: I0218 20:03:04.603822 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tr94t"] Feb 18 20:03:04 crc kubenswrapper[4754]: I0218 20:03:04.751705 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e5250cf-6861-452c-9ce7-066722cab0c9-catalog-content\") pod \"redhat-operators-tr94t\" (UID: \"1e5250cf-6861-452c-9ce7-066722cab0c9\") " pod="openshift-marketplace/redhat-operators-tr94t" Feb 18 20:03:04 crc kubenswrapper[4754]: I0218 20:03:04.751819 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsfh7\" (UniqueName: \"kubernetes.io/projected/1e5250cf-6861-452c-9ce7-066722cab0c9-kube-api-access-nsfh7\") pod \"redhat-operators-tr94t\" (UID: \"1e5250cf-6861-452c-9ce7-066722cab0c9\") " pod="openshift-marketplace/redhat-operators-tr94t" Feb 18 20:03:04 crc kubenswrapper[4754]: I0218 20:03:04.751902 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1e5250cf-6861-452c-9ce7-066722cab0c9-utilities\") pod \"redhat-operators-tr94t\" (UID: \"1e5250cf-6861-452c-9ce7-066722cab0c9\") " pod="openshift-marketplace/redhat-operators-tr94t" Feb 18 20:03:04 crc kubenswrapper[4754]: I0218 20:03:04.853476 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e5250cf-6861-452c-9ce7-066722cab0c9-utilities\") pod \"redhat-operators-tr94t\" (UID: \"1e5250cf-6861-452c-9ce7-066722cab0c9\") " pod="openshift-marketplace/redhat-operators-tr94t" Feb 18 20:03:04 crc kubenswrapper[4754]: I0218 20:03:04.854053 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e5250cf-6861-452c-9ce7-066722cab0c9-utilities\") pod \"redhat-operators-tr94t\" (UID: \"1e5250cf-6861-452c-9ce7-066722cab0c9\") " pod="openshift-marketplace/redhat-operators-tr94t" Feb 18 20:03:04 crc kubenswrapper[4754]: I0218 20:03:04.854183 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e5250cf-6861-452c-9ce7-066722cab0c9-catalog-content\") pod \"redhat-operators-tr94t\" (UID: \"1e5250cf-6861-452c-9ce7-066722cab0c9\") " pod="openshift-marketplace/redhat-operators-tr94t" Feb 18 20:03:04 crc kubenswrapper[4754]: I0218 20:03:04.854331 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsfh7\" (UniqueName: \"kubernetes.io/projected/1e5250cf-6861-452c-9ce7-066722cab0c9-kube-api-access-nsfh7\") pod \"redhat-operators-tr94t\" (UID: \"1e5250cf-6861-452c-9ce7-066722cab0c9\") " pod="openshift-marketplace/redhat-operators-tr94t" Feb 18 20:03:04 crc kubenswrapper[4754]: I0218 20:03:04.854454 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1e5250cf-6861-452c-9ce7-066722cab0c9-catalog-content\") pod \"redhat-operators-tr94t\" (UID: \"1e5250cf-6861-452c-9ce7-066722cab0c9\") " pod="openshift-marketplace/redhat-operators-tr94t" Feb 18 20:03:04 crc kubenswrapper[4754]: I0218 20:03:04.873644 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsfh7\" (UniqueName: \"kubernetes.io/projected/1e5250cf-6861-452c-9ce7-066722cab0c9-kube-api-access-nsfh7\") pod \"redhat-operators-tr94t\" (UID: \"1e5250cf-6861-452c-9ce7-066722cab0c9\") " pod="openshift-marketplace/redhat-operators-tr94t" Feb 18 20:03:04 crc kubenswrapper[4754]: I0218 20:03:04.918953 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tr94t" Feb 18 20:03:05 crc kubenswrapper[4754]: I0218 20:03:05.399908 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tr94t"] Feb 18 20:03:05 crc kubenswrapper[4754]: W0218 20:03:05.408975 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e5250cf_6861_452c_9ce7_066722cab0c9.slice/crio-765c772b2a499389bd7ef51b0daa22b9134ce7834d1d40a202c3436c4a6b50c7 WatchSource:0}: Error finding container 765c772b2a499389bd7ef51b0daa22b9134ce7834d1d40a202c3436c4a6b50c7: Status 404 returned error can't find the container with id 765c772b2a499389bd7ef51b0daa22b9134ce7834d1d40a202c3436c4a6b50c7 Feb 18 20:03:05 crc kubenswrapper[4754]: I0218 20:03:05.722480 4754 generic.go:334] "Generic (PLEG): container finished" podID="1e5250cf-6861-452c-9ce7-066722cab0c9" containerID="ed3ed695a37b27d7bdaa9e182e8ea994ae0ca3400997b24a96011d03399185b3" exitCode=0 Feb 18 20:03:05 crc kubenswrapper[4754]: I0218 20:03:05.724246 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr94t" 
event={"ID":"1e5250cf-6861-452c-9ce7-066722cab0c9","Type":"ContainerDied","Data":"ed3ed695a37b27d7bdaa9e182e8ea994ae0ca3400997b24a96011d03399185b3"} Feb 18 20:03:05 crc kubenswrapper[4754]: I0218 20:03:05.724299 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr94t" event={"ID":"1e5250cf-6861-452c-9ce7-066722cab0c9","Type":"ContainerStarted","Data":"765c772b2a499389bd7ef51b0daa22b9134ce7834d1d40a202c3436c4a6b50c7"} Feb 18 20:03:07 crc kubenswrapper[4754]: I0218 20:03:07.742906 4754 generic.go:334] "Generic (PLEG): container finished" podID="1e5250cf-6861-452c-9ce7-066722cab0c9" containerID="af3cd4248f8b6b71e69813875070bc0a1d374bfdd6abaee4d1ad2477320419bf" exitCode=0 Feb 18 20:03:07 crc kubenswrapper[4754]: I0218 20:03:07.742971 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr94t" event={"ID":"1e5250cf-6861-452c-9ce7-066722cab0c9","Type":"ContainerDied","Data":"af3cd4248f8b6b71e69813875070bc0a1d374bfdd6abaee4d1ad2477320419bf"} Feb 18 20:03:08 crc kubenswrapper[4754]: I0218 20:03:08.096628 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:03:08 crc kubenswrapper[4754]: I0218 20:03:08.096896 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:03:08 crc kubenswrapper[4754]: I0218 20:03:08.096944 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" 
Feb 18 20:03:08 crc kubenswrapper[4754]: I0218 20:03:08.097648 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ad271beffae4d53604516072d7e3753e99a9dd5613a29dc4bf6ae71a7b3a58b"} pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:03:08 crc kubenswrapper[4754]: I0218 20:03:08.097707 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" containerID="cri-o://8ad271beffae4d53604516072d7e3753e99a9dd5613a29dc4bf6ae71a7b3a58b" gracePeriod=600 Feb 18 20:03:08 crc kubenswrapper[4754]: I0218 20:03:08.753723 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr94t" event={"ID":"1e5250cf-6861-452c-9ce7-066722cab0c9","Type":"ContainerStarted","Data":"491b1e9cb67c94e15f4ef2abe59876e245671a83664a71409377422472fed953"} Feb 18 20:03:08 crc kubenswrapper[4754]: I0218 20:03:08.780271 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tr94t" podStartSLOduration=2.31071808 podStartE2EDuration="4.78024821s" podCreationTimestamp="2026-02-18 20:03:04 +0000 UTC" firstStartedPulling="2026-02-18 20:03:05.725593349 +0000 UTC m=+2688.176006145" lastFinishedPulling="2026-02-18 20:03:08.195123479 +0000 UTC m=+2690.645536275" observedRunningTime="2026-02-18 20:03:08.774815931 +0000 UTC m=+2691.225228727" watchObservedRunningTime="2026-02-18 20:03:08.78024821 +0000 UTC m=+2691.230661006" Feb 18 20:03:10 crc kubenswrapper[4754]: I0218 20:03:10.777682 4754 generic.go:334] "Generic (PLEG): container finished" podID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" 
containerID="8ad271beffae4d53604516072d7e3753e99a9dd5613a29dc4bf6ae71a7b3a58b" exitCode=0 Feb 18 20:03:10 crc kubenswrapper[4754]: I0218 20:03:10.777754 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerDied","Data":"8ad271beffae4d53604516072d7e3753e99a9dd5613a29dc4bf6ae71a7b3a58b"} Feb 18 20:03:10 crc kubenswrapper[4754]: I0218 20:03:10.778280 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752"} Feb 18 20:03:10 crc kubenswrapper[4754]: I0218 20:03:10.778314 4754 scope.go:117] "RemoveContainer" containerID="37413b03b4b71ecb8eff5bcabf1c4d13ef9eb71541bf680aa4a0b3a4f340bda2" Feb 18 20:03:14 crc kubenswrapper[4754]: I0218 20:03:14.920198 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tr94t" Feb 18 20:03:14 crc kubenswrapper[4754]: I0218 20:03:14.920710 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tr94t" Feb 18 20:03:14 crc kubenswrapper[4754]: I0218 20:03:14.981958 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tr94t" Feb 18 20:03:15 crc kubenswrapper[4754]: I0218 20:03:15.906746 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tr94t" Feb 18 20:03:15 crc kubenswrapper[4754]: I0218 20:03:15.971354 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tr94t"] Feb 18 20:03:17 crc kubenswrapper[4754]: I0218 20:03:17.856820 4754 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-tr94t" podUID="1e5250cf-6861-452c-9ce7-066722cab0c9" containerName="registry-server" containerID="cri-o://491b1e9cb67c94e15f4ef2abe59876e245671a83664a71409377422472fed953" gracePeriod=2 Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.339361 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tr94t" Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.460296 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e5250cf-6861-452c-9ce7-066722cab0c9-catalog-content\") pod \"1e5250cf-6861-452c-9ce7-066722cab0c9\" (UID: \"1e5250cf-6861-452c-9ce7-066722cab0c9\") " Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.460447 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e5250cf-6861-452c-9ce7-066722cab0c9-utilities\") pod \"1e5250cf-6861-452c-9ce7-066722cab0c9\" (UID: \"1e5250cf-6861-452c-9ce7-066722cab0c9\") " Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.460483 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsfh7\" (UniqueName: \"kubernetes.io/projected/1e5250cf-6861-452c-9ce7-066722cab0c9-kube-api-access-nsfh7\") pod \"1e5250cf-6861-452c-9ce7-066722cab0c9\" (UID: \"1e5250cf-6861-452c-9ce7-066722cab0c9\") " Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.461437 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e5250cf-6861-452c-9ce7-066722cab0c9-utilities" (OuterVolumeSpecName: "utilities") pod "1e5250cf-6861-452c-9ce7-066722cab0c9" (UID: "1e5250cf-6861-452c-9ce7-066722cab0c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.465607 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e5250cf-6861-452c-9ce7-066722cab0c9-kube-api-access-nsfh7" (OuterVolumeSpecName: "kube-api-access-nsfh7") pod "1e5250cf-6861-452c-9ce7-066722cab0c9" (UID: "1e5250cf-6861-452c-9ce7-066722cab0c9"). InnerVolumeSpecName "kube-api-access-nsfh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.563325 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e5250cf-6861-452c-9ce7-066722cab0c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.563389 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsfh7\" (UniqueName: \"kubernetes.io/projected/1e5250cf-6861-452c-9ce7-066722cab0c9-kube-api-access-nsfh7\") on node \"crc\" DevicePath \"\"" Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.587622 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e5250cf-6861-452c-9ce7-066722cab0c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e5250cf-6861-452c-9ce7-066722cab0c9" (UID: "1e5250cf-6861-452c-9ce7-066722cab0c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.666380 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e5250cf-6861-452c-9ce7-066722cab0c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.875549 4754 generic.go:334] "Generic (PLEG): container finished" podID="1e5250cf-6861-452c-9ce7-066722cab0c9" containerID="491b1e9cb67c94e15f4ef2abe59876e245671a83664a71409377422472fed953" exitCode=0 Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.875596 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr94t" event={"ID":"1e5250cf-6861-452c-9ce7-066722cab0c9","Type":"ContainerDied","Data":"491b1e9cb67c94e15f4ef2abe59876e245671a83664a71409377422472fed953"} Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.875637 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr94t" event={"ID":"1e5250cf-6861-452c-9ce7-066722cab0c9","Type":"ContainerDied","Data":"765c772b2a499389bd7ef51b0daa22b9134ce7834d1d40a202c3436c4a6b50c7"} Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.875661 4754 scope.go:117] "RemoveContainer" containerID="491b1e9cb67c94e15f4ef2abe59876e245671a83664a71409377422472fed953" Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.875754 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tr94t" Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.910217 4754 scope.go:117] "RemoveContainer" containerID="af3cd4248f8b6b71e69813875070bc0a1d374bfdd6abaee4d1ad2477320419bf" Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.910916 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tr94t"] Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.923770 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tr94t"] Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.932347 4754 scope.go:117] "RemoveContainer" containerID="ed3ed695a37b27d7bdaa9e182e8ea994ae0ca3400997b24a96011d03399185b3" Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.978835 4754 scope.go:117] "RemoveContainer" containerID="491b1e9cb67c94e15f4ef2abe59876e245671a83664a71409377422472fed953" Feb 18 20:03:18 crc kubenswrapper[4754]: E0218 20:03:18.979256 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"491b1e9cb67c94e15f4ef2abe59876e245671a83664a71409377422472fed953\": container with ID starting with 491b1e9cb67c94e15f4ef2abe59876e245671a83664a71409377422472fed953 not found: ID does not exist" containerID="491b1e9cb67c94e15f4ef2abe59876e245671a83664a71409377422472fed953" Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.979287 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"491b1e9cb67c94e15f4ef2abe59876e245671a83664a71409377422472fed953"} err="failed to get container status \"491b1e9cb67c94e15f4ef2abe59876e245671a83664a71409377422472fed953\": rpc error: code = NotFound desc = could not find container \"491b1e9cb67c94e15f4ef2abe59876e245671a83664a71409377422472fed953\": container with ID starting with 491b1e9cb67c94e15f4ef2abe59876e245671a83664a71409377422472fed953 not found: ID does 
not exist" Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.979306 4754 scope.go:117] "RemoveContainer" containerID="af3cd4248f8b6b71e69813875070bc0a1d374bfdd6abaee4d1ad2477320419bf" Feb 18 20:03:18 crc kubenswrapper[4754]: E0218 20:03:18.980179 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af3cd4248f8b6b71e69813875070bc0a1d374bfdd6abaee4d1ad2477320419bf\": container with ID starting with af3cd4248f8b6b71e69813875070bc0a1d374bfdd6abaee4d1ad2477320419bf not found: ID does not exist" containerID="af3cd4248f8b6b71e69813875070bc0a1d374bfdd6abaee4d1ad2477320419bf" Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.980223 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af3cd4248f8b6b71e69813875070bc0a1d374bfdd6abaee4d1ad2477320419bf"} err="failed to get container status \"af3cd4248f8b6b71e69813875070bc0a1d374bfdd6abaee4d1ad2477320419bf\": rpc error: code = NotFound desc = could not find container \"af3cd4248f8b6b71e69813875070bc0a1d374bfdd6abaee4d1ad2477320419bf\": container with ID starting with af3cd4248f8b6b71e69813875070bc0a1d374bfdd6abaee4d1ad2477320419bf not found: ID does not exist" Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.980253 4754 scope.go:117] "RemoveContainer" containerID="ed3ed695a37b27d7bdaa9e182e8ea994ae0ca3400997b24a96011d03399185b3" Feb 18 20:03:18 crc kubenswrapper[4754]: E0218 20:03:18.981180 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed3ed695a37b27d7bdaa9e182e8ea994ae0ca3400997b24a96011d03399185b3\": container with ID starting with ed3ed695a37b27d7bdaa9e182e8ea994ae0ca3400997b24a96011d03399185b3 not found: ID does not exist" containerID="ed3ed695a37b27d7bdaa9e182e8ea994ae0ca3400997b24a96011d03399185b3" Feb 18 20:03:18 crc kubenswrapper[4754]: I0218 20:03:18.981273 4754 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed3ed695a37b27d7bdaa9e182e8ea994ae0ca3400997b24a96011d03399185b3"} err="failed to get container status \"ed3ed695a37b27d7bdaa9e182e8ea994ae0ca3400997b24a96011d03399185b3\": rpc error: code = NotFound desc = could not find container \"ed3ed695a37b27d7bdaa9e182e8ea994ae0ca3400997b24a96011d03399185b3\": container with ID starting with ed3ed695a37b27d7bdaa9e182e8ea994ae0ca3400997b24a96011d03399185b3 not found: ID does not exist" Feb 18 20:03:20 crc kubenswrapper[4754]: I0218 20:03:20.241786 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e5250cf-6861-452c-9ce7-066722cab0c9" path="/var/lib/kubelet/pods/1e5250cf-6861-452c-9ce7-066722cab0c9/volumes" Feb 18 20:03:33 crc kubenswrapper[4754]: I0218 20:03:33.342181 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:03:33 crc kubenswrapper[4754]: I0218 20:03:33.342814 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" containerName="prometheus" containerID="cri-o://2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a" gracePeriod=600 Feb 18 20:03:33 crc kubenswrapper[4754]: I0218 20:03:33.342941 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" containerName="thanos-sidecar" containerID="cri-o://f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d" gracePeriod=600 Feb 18 20:03:33 crc kubenswrapper[4754]: I0218 20:03:33.343018 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" containerName="config-reloader" containerID="cri-o://3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66" gracePeriod=600 Feb 18 
20:03:33 crc kubenswrapper[4754]: I0218 20:03:33.407084 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.144:9090/-/ready\": dial tcp 10.217.0.144:9090: connect: connection refused" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.019881 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.035824 4754 generic.go:334] "Generic (PLEG): container finished" podID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" containerID="f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d" exitCode=0 Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.035879 4754 generic.go:334] "Generic (PLEG): container finished" podID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" containerID="3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66" exitCode=0 Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.035889 4754 generic.go:334] "Generic (PLEG): container finished" podID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" containerID="2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a" exitCode=0 Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.035905 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b0d8175e-d7c0-49e6-bce1-770d2dac9b74","Type":"ContainerDied","Data":"f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d"} Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.035967 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b0d8175e-d7c0-49e6-bce1-770d2dac9b74","Type":"ContainerDied","Data":"3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66"} Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 
20:03:34.035981 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b0d8175e-d7c0-49e6-bce1-770d2dac9b74","Type":"ContainerDied","Data":"2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a"} Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.035994 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b0d8175e-d7c0-49e6-bce1-770d2dac9b74","Type":"ContainerDied","Data":"fa2bdedc70a58100b582f1e211e8a10d7182c18490a4112552a7bbadbf2a9aa9"} Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.036013 4754 scope.go:117] "RemoveContainer" containerID="f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.036339 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.060064 4754 scope.go:117] "RemoveContainer" containerID="3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.091811 4754 scope.go:117] "RemoveContainer" containerID="2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.105070 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.105307 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\") pod \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.105483 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-tls-assets\") pod \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.105520 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config\") pod \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.105561 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-config-out\") pod \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.105589 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf9xx\" (UniqueName: \"kubernetes.io/projected/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-kube-api-access-bf9xx\") pod \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.105645 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\" (UID: 
\"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.105684 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-config\") pod \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.105753 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-prometheus-metric-storage-rulefiles-2\") pod \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.105803 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-secret-combined-ca-bundle\") pod \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.105830 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-prometheus-metric-storage-rulefiles-0\") pod \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.105859 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-thanos-prometheus-http-client-file\") pod \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 
20:03:34.105911 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-prometheus-metric-storage-rulefiles-1\") pod \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.107649 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "b0d8175e-d7c0-49e6-bce1-770d2dac9b74" (UID: "b0d8175e-d7c0-49e6-bce1-770d2dac9b74"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.109592 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "b0d8175e-d7c0-49e6-bce1-770d2dac9b74" (UID: "b0d8175e-d7c0-49e6-bce1-770d2dac9b74"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.112291 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "b0d8175e-d7c0-49e6-bce1-770d2dac9b74" (UID: "b0d8175e-d7c0-49e6-bce1-770d2dac9b74"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.113940 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-kube-api-access-bf9xx" (OuterVolumeSpecName: "kube-api-access-bf9xx") pod "b0d8175e-d7c0-49e6-bce1-770d2dac9b74" (UID: "b0d8175e-d7c0-49e6-bce1-770d2dac9b74"). InnerVolumeSpecName "kube-api-access-bf9xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.115208 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "b0d8175e-d7c0-49e6-bce1-770d2dac9b74" (UID: "b0d8175e-d7c0-49e6-bce1-770d2dac9b74"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.116381 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "b0d8175e-d7c0-49e6-bce1-770d2dac9b74" (UID: "b0d8175e-d7c0-49e6-bce1-770d2dac9b74"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.121379 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-config-out" (OuterVolumeSpecName: "config-out") pod "b0d8175e-d7c0-49e6-bce1-770d2dac9b74" (UID: "b0d8175e-d7c0-49e6-bce1-770d2dac9b74"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.121417 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "b0d8175e-d7c0-49e6-bce1-770d2dac9b74" (UID: "b0d8175e-d7c0-49e6-bce1-770d2dac9b74"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.121458 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-config" (OuterVolumeSpecName: "config") pod "b0d8175e-d7c0-49e6-bce1-770d2dac9b74" (UID: "b0d8175e-d7c0-49e6-bce1-770d2dac9b74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.121475 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "b0d8175e-d7c0-49e6-bce1-770d2dac9b74" (UID: "b0d8175e-d7c0-49e6-bce1-770d2dac9b74"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.121497 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "b0d8175e-d7c0-49e6-bce1-770d2dac9b74" (UID: "b0d8175e-d7c0-49e6-bce1-770d2dac9b74"). 
InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.127926 4754 scope.go:117] "RemoveContainer" containerID="783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.159775 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "b0d8175e-d7c0-49e6-bce1-770d2dac9b74" (UID: "b0d8175e-d7c0-49e6-bce1-770d2dac9b74"). InnerVolumeSpecName "pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.163276 4754 scope.go:117] "RemoveContainer" containerID="f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d" Feb 18 20:03:34 crc kubenswrapper[4754]: E0218 20:03:34.168873 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d\": container with ID starting with f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d not found: ID does not exist" containerID="f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.168917 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d"} err="failed to get container status \"f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d\": rpc error: code = NotFound desc = could not find container \"f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d\": container with ID starting with 
f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d not found: ID does not exist" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.168944 4754 scope.go:117] "RemoveContainer" containerID="3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66" Feb 18 20:03:34 crc kubenswrapper[4754]: E0218 20:03:34.170019 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66\": container with ID starting with 3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66 not found: ID does not exist" containerID="3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.170042 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66"} err="failed to get container status \"3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66\": rpc error: code = NotFound desc = could not find container \"3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66\": container with ID starting with 3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66 not found: ID does not exist" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.170074 4754 scope.go:117] "RemoveContainer" containerID="2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a" Feb 18 20:03:34 crc kubenswrapper[4754]: E0218 20:03:34.170500 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a\": container with ID starting with 2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a not found: ID does not exist" containerID="2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a" Feb 18 20:03:34 crc 
kubenswrapper[4754]: I0218 20:03:34.170543 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a"} err="failed to get container status \"2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a\": rpc error: code = NotFound desc = could not find container \"2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a\": container with ID starting with 2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a not found: ID does not exist" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.170559 4754 scope.go:117] "RemoveContainer" containerID="783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14" Feb 18 20:03:34 crc kubenswrapper[4754]: E0218 20:03:34.175012 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14\": container with ID starting with 783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14 not found: ID does not exist" containerID="783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.175067 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14"} err="failed to get container status \"783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14\": rpc error: code = NotFound desc = could not find container \"783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14\": container with ID starting with 783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14 not found: ID does not exist" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.175100 4754 scope.go:117] "RemoveContainer" containerID="f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d" Feb 18 
20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.175643 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d"} err="failed to get container status \"f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d\": rpc error: code = NotFound desc = could not find container \"f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d\": container with ID starting with f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d not found: ID does not exist" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.175686 4754 scope.go:117] "RemoveContainer" containerID="3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.186397 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66"} err="failed to get container status \"3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66\": rpc error: code = NotFound desc = could not find container \"3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66\": container with ID starting with 3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66 not found: ID does not exist" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.186463 4754 scope.go:117] "RemoveContainer" containerID="2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.187015 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a"} err="failed to get container status \"2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a\": rpc error: code = NotFound desc = could not find container 
\"2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a\": container with ID starting with 2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a not found: ID does not exist" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.187044 4754 scope.go:117] "RemoveContainer" containerID="783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.188792 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14"} err="failed to get container status \"783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14\": rpc error: code = NotFound desc = could not find container \"783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14\": container with ID starting with 783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14 not found: ID does not exist" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.188821 4754 scope.go:117] "RemoveContainer" containerID="f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.189091 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d"} err="failed to get container status \"f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d\": rpc error: code = NotFound desc = could not find container \"f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d\": container with ID starting with f5e685f84bf288b9304091b855431d8f4125dcb4fe70231aea78a757df52db5d not found: ID does not exist" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.189110 4754 scope.go:117] "RemoveContainer" containerID="3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.189711 4754 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66"} err="failed to get container status \"3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66\": rpc error: code = NotFound desc = could not find container \"3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66\": container with ID starting with 3749e36f42ea40d4b9839b9ede9978e077e9dce1df6f9fb4cc43c1278b9f3a66 not found: ID does not exist" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.189731 4754 scope.go:117] "RemoveContainer" containerID="2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.189997 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a"} err="failed to get container status \"2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a\": rpc error: code = NotFound desc = could not find container \"2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a\": container with ID starting with 2a890ccf7fc09a93bacf6196fc0cdbdc6ecb6a19ec5d81f8cd77e5cf81b6d84a not found: ID does not exist" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.190055 4754 scope.go:117] "RemoveContainer" containerID="783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.190963 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14"} err="failed to get container status \"783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14\": rpc error: code = NotFound desc = could not find container \"783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14\": container with ID starting with 
783517e9fd9b85a8f06baee9385eac9e311241cd9661157e6caaf7cd4f56bd14 not found: ID does not exist" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.207205 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config" (OuterVolumeSpecName: "web-config") pod "b0d8175e-d7c0-49e6-bce1-770d2dac9b74" (UID: "b0d8175e-d7c0-49e6-bce1-770d2dac9b74"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.207971 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config\") pod \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\" (UID: \"b0d8175e-d7c0-49e6-bce1-770d2dac9b74\") " Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.208387 4754 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-config\") on node \"crc\" DevicePath \"\"" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.208407 4754 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.208418 4754 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.208427 4754 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-prometheus-metric-storage-rulefiles-0\") on 
node \"crc\" DevicePath \"\"" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.208440 4754 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.208448 4754 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.208457 4754 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.208481 4754 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\") on node \"crc\" " Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.208492 4754 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.208501 4754 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-config-out\") on node \"crc\" DevicePath \"\"" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.208509 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf9xx\" 
(UniqueName: \"kubernetes.io/projected/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-kube-api-access-bf9xx\") on node \"crc\" DevicePath \"\"" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.208519 4754 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 18 20:03:34 crc kubenswrapper[4754]: W0218 20:03:34.213371 4754 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b0d8175e-d7c0-49e6-bce1-770d2dac9b74/volumes/kubernetes.io~secret/web-config Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.213415 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config" (OuterVolumeSpecName: "web-config") pod "b0d8175e-d7c0-49e6-bce1-770d2dac9b74" (UID: "b0d8175e-d7c0-49e6-bce1-770d2dac9b74"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.238504 4754 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.238631 4754 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6") on node "crc" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.310714 4754 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b0d8175e-d7c0-49e6-bce1-770d2dac9b74-web-config\") on node \"crc\" DevicePath \"\"" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.310957 4754 reconciler_common.go:293] "Volume detached for volume \"pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\") on node \"crc\" DevicePath \"\"" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.357915 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.366897 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.386921 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:03:34 crc kubenswrapper[4754]: E0218 20:03:34.387386 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" containerName="init-config-reloader" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.387429 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" containerName="init-config-reloader" Feb 18 20:03:34 crc kubenswrapper[4754]: E0218 20:03:34.387444 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5250cf-6861-452c-9ce7-066722cab0c9" containerName="registry-server" Feb 18 20:03:34 crc 
kubenswrapper[4754]: I0218 20:03:34.387452 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e5250cf-6861-452c-9ce7-066722cab0c9" containerName="registry-server" Feb 18 20:03:34 crc kubenswrapper[4754]: E0218 20:03:34.387467 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" containerName="prometheus" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.387475 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" containerName="prometheus" Feb 18 20:03:34 crc kubenswrapper[4754]: E0218 20:03:34.387490 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" containerName="config-reloader" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.387496 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" containerName="config-reloader" Feb 18 20:03:34 crc kubenswrapper[4754]: E0218 20:03:34.387513 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" containerName="thanos-sidecar" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.387521 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" containerName="thanos-sidecar" Feb 18 20:03:34 crc kubenswrapper[4754]: E0218 20:03:34.387542 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5250cf-6861-452c-9ce7-066722cab0c9" containerName="extract-content" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.387549 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e5250cf-6861-452c-9ce7-066722cab0c9" containerName="extract-content" Feb 18 20:03:34 crc kubenswrapper[4754]: E0218 20:03:34.387581 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5250cf-6861-452c-9ce7-066722cab0c9" containerName="extract-utilities" Feb 18 20:03:34 crc kubenswrapper[4754]: 
I0218 20:03:34.387589 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e5250cf-6861-452c-9ce7-066722cab0c9" containerName="extract-utilities" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.387821 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e5250cf-6861-452c-9ce7-066722cab0c9" containerName="registry-server" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.387841 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" containerName="prometheus" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.387863 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" containerName="thanos-sidecar" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.387872 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" containerName="config-reloader" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.390199 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.394218 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.402286 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.403023 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.403214 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.403423 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-jgskl" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.403871 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.404296 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.404852 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.406729 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.412199 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/672b508d-bc50-4ce2-8638-452e41f15a39-web-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.412248 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/672b508d-bc50-4ce2-8638-452e41f15a39-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.412271 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/672b508d-bc50-4ce2-8638-452e41f15a39-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.412304 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/672b508d-bc50-4ce2-8638-452e41f15a39-config\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.412329 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/672b508d-bc50-4ce2-8638-452e41f15a39-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.412351 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/672b508d-bc50-4ce2-8638-452e41f15a39-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.412409 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/672b508d-bc50-4ce2-8638-452e41f15a39-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.412440 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/672b508d-bc50-4ce2-8638-452e41f15a39-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.412513 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/672b508d-bc50-4ce2-8638-452e41f15a39-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.412589 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/672b508d-bc50-4ce2-8638-452e41f15a39-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.412641 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnrmk\" (UniqueName: \"kubernetes.io/projected/672b508d-bc50-4ce2-8638-452e41f15a39-kube-api-access-mnrmk\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.412672 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/672b508d-bc50-4ce2-8638-452e41f15a39-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.412700 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.514276 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/672b508d-bc50-4ce2-8638-452e41f15a39-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.514321 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mnrmk\" (UniqueName: \"kubernetes.io/projected/672b508d-bc50-4ce2-8638-452e41f15a39-kube-api-access-mnrmk\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.514352 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/672b508d-bc50-4ce2-8638-452e41f15a39-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.514392 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.514421 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/672b508d-bc50-4ce2-8638-452e41f15a39-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.514460 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/672b508d-bc50-4ce2-8638-452e41f15a39-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 
20:03:34.514481 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/672b508d-bc50-4ce2-8638-452e41f15a39-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.514518 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/672b508d-bc50-4ce2-8638-452e41f15a39-config\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.514546 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/672b508d-bc50-4ce2-8638-452e41f15a39-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.514567 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/672b508d-bc50-4ce2-8638-452e41f15a39-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.514607 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/672b508d-bc50-4ce2-8638-452e41f15a39-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: 
\"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.514632 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/672b508d-bc50-4ce2-8638-452e41f15a39-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.514674 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/672b508d-bc50-4ce2-8638-452e41f15a39-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.515809 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/672b508d-bc50-4ce2-8638-452e41f15a39-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.516232 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/672b508d-bc50-4ce2-8638-452e41f15a39-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.518018 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/672b508d-bc50-4ce2-8638-452e41f15a39-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.519299 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/672b508d-bc50-4ce2-8638-452e41f15a39-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.519372 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/672b508d-bc50-4ce2-8638-452e41f15a39-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.519722 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/672b508d-bc50-4ce2-8638-452e41f15a39-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.520771 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/672b508d-bc50-4ce2-8638-452e41f15a39-config\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.520992 4754 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not 
set. Skipping MountDevice... Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.521032 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/81ceb4b204ea14622a6725e3273bbd9693392e71b58bdedf5b3f6ad4f339a7ba/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.521299 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/672b508d-bc50-4ce2-8638-452e41f15a39-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.521483 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/672b508d-bc50-4ce2-8638-452e41f15a39-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.521607 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/672b508d-bc50-4ce2-8638-452e41f15a39-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 
20:03:34.523924 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/672b508d-bc50-4ce2-8638-452e41f15a39-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.532309 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnrmk\" (UniqueName: \"kubernetes.io/projected/672b508d-bc50-4ce2-8638-452e41f15a39-kube-api-access-mnrmk\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.564274 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a33fabb5-95bc-4fc2-88f9-b67b7e0b65a6\") pod \"prometheus-metric-storage-0\" (UID: \"672b508d-bc50-4ce2-8638-452e41f15a39\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:34 crc kubenswrapper[4754]: I0218 20:03:34.715004 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:03:35 crc kubenswrapper[4754]: I0218 20:03:35.238053 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:03:36 crc kubenswrapper[4754]: I0218 20:03:36.052915 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"672b508d-bc50-4ce2-8638-452e41f15a39","Type":"ContainerStarted","Data":"b3641ae758ebbf41007ce6e2e09c14d0ce94d7968d3905f8295aa4ecd94776c7"} Feb 18 20:03:36 crc kubenswrapper[4754]: I0218 20:03:36.220761 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0d8175e-d7c0-49e6-bce1-770d2dac9b74" path="/var/lib/kubelet/pods/b0d8175e-d7c0-49e6-bce1-770d2dac9b74/volumes" Feb 18 20:03:39 crc kubenswrapper[4754]: I0218 20:03:39.097261 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"672b508d-bc50-4ce2-8638-452e41f15a39","Type":"ContainerStarted","Data":"fa81e641fb6124aae369bdc70a5f4edbc96d6d42bc016915785ca4c541e062b9"} Feb 18 20:03:46 crc kubenswrapper[4754]: I0218 20:03:46.166002 4754 generic.go:334] "Generic (PLEG): container finished" podID="672b508d-bc50-4ce2-8638-452e41f15a39" containerID="fa81e641fb6124aae369bdc70a5f4edbc96d6d42bc016915785ca4c541e062b9" exitCode=0 Feb 18 20:03:46 crc kubenswrapper[4754]: I0218 20:03:46.166263 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"672b508d-bc50-4ce2-8638-452e41f15a39","Type":"ContainerDied","Data":"fa81e641fb6124aae369bdc70a5f4edbc96d6d42bc016915785ca4c541e062b9"} Feb 18 20:03:47 crc kubenswrapper[4754]: I0218 20:03:47.179866 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"672b508d-bc50-4ce2-8638-452e41f15a39","Type":"ContainerStarted","Data":"4736aada4a14ada2461310f7265fb8e1f07c8fff0f3f1cacd9b308cfee1aeb29"} Feb 18 
20:03:51 crc kubenswrapper[4754]: I0218 20:03:51.222259 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"672b508d-bc50-4ce2-8638-452e41f15a39","Type":"ContainerStarted","Data":"445f97d341453ebb787fa4463c6721737f579f17c2a3be4edde1f02e9d30cff0"} Feb 18 20:03:51 crc kubenswrapper[4754]: I0218 20:03:51.222723 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"672b508d-bc50-4ce2-8638-452e41f15a39","Type":"ContainerStarted","Data":"511b35eac791f290675498983ef377701912575cbb4e94f4c209304ddc1af264"} Feb 18 20:03:51 crc kubenswrapper[4754]: I0218 20:03:51.259679 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.259662479 podStartE2EDuration="17.259662479s" podCreationTimestamp="2026-02-18 20:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:03:51.253051943 +0000 UTC m=+2733.703464779" watchObservedRunningTime="2026-02-18 20:03:51.259662479 +0000 UTC m=+2733.710075275" Feb 18 20:03:54 crc kubenswrapper[4754]: I0218 20:03:54.715734 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 20:04:04 crc kubenswrapper[4754]: I0218 20:04:04.715700 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 20:04:04 crc kubenswrapper[4754]: I0218 20:04:04.728712 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 20:04:05 crc kubenswrapper[4754]: I0218 20:04:05.390618 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 20:04:06 crc kubenswrapper[4754]: I0218 20:04:06.759805 4754 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-btwq6"] Feb 18 20:04:06 crc kubenswrapper[4754]: I0218 20:04:06.762656 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-btwq6" Feb 18 20:04:06 crc kubenswrapper[4754]: I0218 20:04:06.772974 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-btwq6"] Feb 18 20:04:06 crc kubenswrapper[4754]: I0218 20:04:06.889406 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78f841dd-e5b5-408d-83ab-34f2a8c89b12-catalog-content\") pod \"community-operators-btwq6\" (UID: \"78f841dd-e5b5-408d-83ab-34f2a8c89b12\") " pod="openshift-marketplace/community-operators-btwq6" Feb 18 20:04:06 crc kubenswrapper[4754]: I0218 20:04:06.889462 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj9nc\" (UniqueName: \"kubernetes.io/projected/78f841dd-e5b5-408d-83ab-34f2a8c89b12-kube-api-access-rj9nc\") pod \"community-operators-btwq6\" (UID: \"78f841dd-e5b5-408d-83ab-34f2a8c89b12\") " pod="openshift-marketplace/community-operators-btwq6" Feb 18 20:04:06 crc kubenswrapper[4754]: I0218 20:04:06.889810 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78f841dd-e5b5-408d-83ab-34f2a8c89b12-utilities\") pod \"community-operators-btwq6\" (UID: \"78f841dd-e5b5-408d-83ab-34f2a8c89b12\") " pod="openshift-marketplace/community-operators-btwq6" Feb 18 20:04:06 crc kubenswrapper[4754]: I0218 20:04:06.991334 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78f841dd-e5b5-408d-83ab-34f2a8c89b12-utilities\") pod \"community-operators-btwq6\" (UID: 
\"78f841dd-e5b5-408d-83ab-34f2a8c89b12\") " pod="openshift-marketplace/community-operators-btwq6" Feb 18 20:04:06 crc kubenswrapper[4754]: I0218 20:04:06.991413 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78f841dd-e5b5-408d-83ab-34f2a8c89b12-catalog-content\") pod \"community-operators-btwq6\" (UID: \"78f841dd-e5b5-408d-83ab-34f2a8c89b12\") " pod="openshift-marketplace/community-operators-btwq6" Feb 18 20:04:06 crc kubenswrapper[4754]: I0218 20:04:06.991440 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj9nc\" (UniqueName: \"kubernetes.io/projected/78f841dd-e5b5-408d-83ab-34f2a8c89b12-kube-api-access-rj9nc\") pod \"community-operators-btwq6\" (UID: \"78f841dd-e5b5-408d-83ab-34f2a8c89b12\") " pod="openshift-marketplace/community-operators-btwq6" Feb 18 20:04:06 crc kubenswrapper[4754]: I0218 20:04:06.992095 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78f841dd-e5b5-408d-83ab-34f2a8c89b12-utilities\") pod \"community-operators-btwq6\" (UID: \"78f841dd-e5b5-408d-83ab-34f2a8c89b12\") " pod="openshift-marketplace/community-operators-btwq6" Feb 18 20:04:06 crc kubenswrapper[4754]: I0218 20:04:06.992231 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78f841dd-e5b5-408d-83ab-34f2a8c89b12-catalog-content\") pod \"community-operators-btwq6\" (UID: \"78f841dd-e5b5-408d-83ab-34f2a8c89b12\") " pod="openshift-marketplace/community-operators-btwq6" Feb 18 20:04:07 crc kubenswrapper[4754]: I0218 20:04:07.015919 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj9nc\" (UniqueName: \"kubernetes.io/projected/78f841dd-e5b5-408d-83ab-34f2a8c89b12-kube-api-access-rj9nc\") pod \"community-operators-btwq6\" (UID: 
\"78f841dd-e5b5-408d-83ab-34f2a8c89b12\") " pod="openshift-marketplace/community-operators-btwq6" Feb 18 20:04:07 crc kubenswrapper[4754]: I0218 20:04:07.092830 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-btwq6" Feb 18 20:04:07 crc kubenswrapper[4754]: I0218 20:04:07.637213 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-btwq6"] Feb 18 20:04:07 crc kubenswrapper[4754]: W0218 20:04:07.637912 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78f841dd_e5b5_408d_83ab_34f2a8c89b12.slice/crio-9a76d37b432fbc96a9ba8cc7cbc758b6849a22943ef2d8ff8e1854b712b5bc23 WatchSource:0}: Error finding container 9a76d37b432fbc96a9ba8cc7cbc758b6849a22943ef2d8ff8e1854b712b5bc23: Status 404 returned error can't find the container with id 9a76d37b432fbc96a9ba8cc7cbc758b6849a22943ef2d8ff8e1854b712b5bc23 Feb 18 20:04:08 crc kubenswrapper[4754]: I0218 20:04:08.410898 4754 generic.go:334] "Generic (PLEG): container finished" podID="78f841dd-e5b5-408d-83ab-34f2a8c89b12" containerID="387ff3b475206e9362ea101b31c7176a2f1b393f36f35ca43e0f3c475bbbb400" exitCode=0 Feb 18 20:04:08 crc kubenswrapper[4754]: I0218 20:04:08.413475 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btwq6" event={"ID":"78f841dd-e5b5-408d-83ab-34f2a8c89b12","Type":"ContainerDied","Data":"387ff3b475206e9362ea101b31c7176a2f1b393f36f35ca43e0f3c475bbbb400"} Feb 18 20:04:08 crc kubenswrapper[4754]: I0218 20:04:08.416204 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btwq6" event={"ID":"78f841dd-e5b5-408d-83ab-34f2a8c89b12","Type":"ContainerStarted","Data":"9a76d37b432fbc96a9ba8cc7cbc758b6849a22943ef2d8ff8e1854b712b5bc23"} Feb 18 20:04:08 crc kubenswrapper[4754]: I0218 20:04:08.414542 4754 provider.go:102] Refreshing 
cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:04:09 crc kubenswrapper[4754]: I0218 20:04:09.426860 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btwq6" event={"ID":"78f841dd-e5b5-408d-83ab-34f2a8c89b12","Type":"ContainerStarted","Data":"955fcebca024e3320c1a839b6d76b08ae2731cd76de19fddb81f92664f8bc482"} Feb 18 20:04:10 crc kubenswrapper[4754]: I0218 20:04:10.457262 4754 generic.go:334] "Generic (PLEG): container finished" podID="78f841dd-e5b5-408d-83ab-34f2a8c89b12" containerID="955fcebca024e3320c1a839b6d76b08ae2731cd76de19fddb81f92664f8bc482" exitCode=0 Feb 18 20:04:10 crc kubenswrapper[4754]: I0218 20:04:10.457321 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btwq6" event={"ID":"78f841dd-e5b5-408d-83ab-34f2a8c89b12","Type":"ContainerDied","Data":"955fcebca024e3320c1a839b6d76b08ae2731cd76de19fddb81f92664f8bc482"} Feb 18 20:04:11 crc kubenswrapper[4754]: I0218 20:04:11.471504 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btwq6" event={"ID":"78f841dd-e5b5-408d-83ab-34f2a8c89b12","Type":"ContainerStarted","Data":"a3c26e6e2246c29430d999860b548e9cf5c6468f9a5a8e1167a07087a4613a43"} Feb 18 20:04:11 crc kubenswrapper[4754]: I0218 20:04:11.491486 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-btwq6" podStartSLOduration=3.048098341 podStartE2EDuration="5.491458808s" podCreationTimestamp="2026-02-18 20:04:06 +0000 UTC" firstStartedPulling="2026-02-18 20:04:08.414213665 +0000 UTC m=+2750.864626451" lastFinishedPulling="2026-02-18 20:04:10.857574082 +0000 UTC m=+2753.307986918" observedRunningTime="2026-02-18 20:04:11.491341774 +0000 UTC m=+2753.941754590" watchObservedRunningTime="2026-02-18 20:04:11.491458808 +0000 UTC m=+2753.941871644" Feb 18 20:04:17 crc kubenswrapper[4754]: I0218 20:04:17.094633 4754 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-btwq6" Feb 18 20:04:17 crc kubenswrapper[4754]: I0218 20:04:17.095273 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-btwq6" Feb 18 20:04:17 crc kubenswrapper[4754]: I0218 20:04:17.169966 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-btwq6" Feb 18 20:04:17 crc kubenswrapper[4754]: I0218 20:04:17.605038 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-btwq6" Feb 18 20:04:17 crc kubenswrapper[4754]: I0218 20:04:17.697090 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-btwq6"] Feb 18 20:04:19 crc kubenswrapper[4754]: I0218 20:04:19.562753 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-btwq6" podUID="78f841dd-e5b5-408d-83ab-34f2a8c89b12" containerName="registry-server" containerID="cri-o://a3c26e6e2246c29430d999860b548e9cf5c6468f9a5a8e1167a07087a4613a43" gracePeriod=2 Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.069688 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-btwq6" Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.158614 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78f841dd-e5b5-408d-83ab-34f2a8c89b12-catalog-content\") pod \"78f841dd-e5b5-408d-83ab-34f2a8c89b12\" (UID: \"78f841dd-e5b5-408d-83ab-34f2a8c89b12\") " Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.158809 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78f841dd-e5b5-408d-83ab-34f2a8c89b12-utilities\") pod \"78f841dd-e5b5-408d-83ab-34f2a8c89b12\" (UID: \"78f841dd-e5b5-408d-83ab-34f2a8c89b12\") " Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.158935 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj9nc\" (UniqueName: \"kubernetes.io/projected/78f841dd-e5b5-408d-83ab-34f2a8c89b12-kube-api-access-rj9nc\") pod \"78f841dd-e5b5-408d-83ab-34f2a8c89b12\" (UID: \"78f841dd-e5b5-408d-83ab-34f2a8c89b12\") " Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.159626 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78f841dd-e5b5-408d-83ab-34f2a8c89b12-utilities" (OuterVolumeSpecName: "utilities") pod "78f841dd-e5b5-408d-83ab-34f2a8c89b12" (UID: "78f841dd-e5b5-408d-83ab-34f2a8c89b12"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.159997 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78f841dd-e5b5-408d-83ab-34f2a8c89b12-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.165502 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78f841dd-e5b5-408d-83ab-34f2a8c89b12-kube-api-access-rj9nc" (OuterVolumeSpecName: "kube-api-access-rj9nc") pod "78f841dd-e5b5-408d-83ab-34f2a8c89b12" (UID: "78f841dd-e5b5-408d-83ab-34f2a8c89b12"). InnerVolumeSpecName "kube-api-access-rj9nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.214802 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78f841dd-e5b5-408d-83ab-34f2a8c89b12-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78f841dd-e5b5-408d-83ab-34f2a8c89b12" (UID: "78f841dd-e5b5-408d-83ab-34f2a8c89b12"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.262105 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78f841dd-e5b5-408d-83ab-34f2a8c89b12-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.262235 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj9nc\" (UniqueName: \"kubernetes.io/projected/78f841dd-e5b5-408d-83ab-34f2a8c89b12-kube-api-access-rj9nc\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.573088 4754 generic.go:334] "Generic (PLEG): container finished" podID="78f841dd-e5b5-408d-83ab-34f2a8c89b12" containerID="a3c26e6e2246c29430d999860b548e9cf5c6468f9a5a8e1167a07087a4613a43" exitCode=0 Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.573128 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btwq6" event={"ID":"78f841dd-e5b5-408d-83ab-34f2a8c89b12","Type":"ContainerDied","Data":"a3c26e6e2246c29430d999860b548e9cf5c6468f9a5a8e1167a07087a4613a43"} Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.573175 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btwq6" event={"ID":"78f841dd-e5b5-408d-83ab-34f2a8c89b12","Type":"ContainerDied","Data":"9a76d37b432fbc96a9ba8cc7cbc758b6849a22943ef2d8ff8e1854b712b5bc23"} Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.573192 4754 scope.go:117] "RemoveContainer" containerID="a3c26e6e2246c29430d999860b548e9cf5c6468f9a5a8e1167a07087a4613a43" Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.573215 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-btwq6" Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.600702 4754 scope.go:117] "RemoveContainer" containerID="955fcebca024e3320c1a839b6d76b08ae2731cd76de19fddb81f92664f8bc482" Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.611857 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-btwq6"] Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.623661 4754 scope.go:117] "RemoveContainer" containerID="387ff3b475206e9362ea101b31c7176a2f1b393f36f35ca43e0f3c475bbbb400" Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.627032 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-btwq6"] Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.687530 4754 scope.go:117] "RemoveContainer" containerID="a3c26e6e2246c29430d999860b548e9cf5c6468f9a5a8e1167a07087a4613a43" Feb 18 20:04:20 crc kubenswrapper[4754]: E0218 20:04:20.687990 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3c26e6e2246c29430d999860b548e9cf5c6468f9a5a8e1167a07087a4613a43\": container with ID starting with a3c26e6e2246c29430d999860b548e9cf5c6468f9a5a8e1167a07087a4613a43 not found: ID does not exist" containerID="a3c26e6e2246c29430d999860b548e9cf5c6468f9a5a8e1167a07087a4613a43" Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.688052 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c26e6e2246c29430d999860b548e9cf5c6468f9a5a8e1167a07087a4613a43"} err="failed to get container status \"a3c26e6e2246c29430d999860b548e9cf5c6468f9a5a8e1167a07087a4613a43\": rpc error: code = NotFound desc = could not find container \"a3c26e6e2246c29430d999860b548e9cf5c6468f9a5a8e1167a07087a4613a43\": container with ID starting with a3c26e6e2246c29430d999860b548e9cf5c6468f9a5a8e1167a07087a4613a43 not 
found: ID does not exist" Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.688086 4754 scope.go:117] "RemoveContainer" containerID="955fcebca024e3320c1a839b6d76b08ae2731cd76de19fddb81f92664f8bc482" Feb 18 20:04:20 crc kubenswrapper[4754]: E0218 20:04:20.688490 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"955fcebca024e3320c1a839b6d76b08ae2731cd76de19fddb81f92664f8bc482\": container with ID starting with 955fcebca024e3320c1a839b6d76b08ae2731cd76de19fddb81f92664f8bc482 not found: ID does not exist" containerID="955fcebca024e3320c1a839b6d76b08ae2731cd76de19fddb81f92664f8bc482" Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.688533 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"955fcebca024e3320c1a839b6d76b08ae2731cd76de19fddb81f92664f8bc482"} err="failed to get container status \"955fcebca024e3320c1a839b6d76b08ae2731cd76de19fddb81f92664f8bc482\": rpc error: code = NotFound desc = could not find container \"955fcebca024e3320c1a839b6d76b08ae2731cd76de19fddb81f92664f8bc482\": container with ID starting with 955fcebca024e3320c1a839b6d76b08ae2731cd76de19fddb81f92664f8bc482 not found: ID does not exist" Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.688563 4754 scope.go:117] "RemoveContainer" containerID="387ff3b475206e9362ea101b31c7176a2f1b393f36f35ca43e0f3c475bbbb400" Feb 18 20:04:20 crc kubenswrapper[4754]: E0218 20:04:20.688827 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"387ff3b475206e9362ea101b31c7176a2f1b393f36f35ca43e0f3c475bbbb400\": container with ID starting with 387ff3b475206e9362ea101b31c7176a2f1b393f36f35ca43e0f3c475bbbb400 not found: ID does not exist" containerID="387ff3b475206e9362ea101b31c7176a2f1b393f36f35ca43e0f3c475bbbb400" Feb 18 20:04:20 crc kubenswrapper[4754]: I0218 20:04:20.688859 4754 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"387ff3b475206e9362ea101b31c7176a2f1b393f36f35ca43e0f3c475bbbb400"} err="failed to get container status \"387ff3b475206e9362ea101b31c7176a2f1b393f36f35ca43e0f3c475bbbb400\": rpc error: code = NotFound desc = could not find container \"387ff3b475206e9362ea101b31c7176a2f1b393f36f35ca43e0f3c475bbbb400\": container with ID starting with 387ff3b475206e9362ea101b31c7176a2f1b393f36f35ca43e0f3c475bbbb400 not found: ID does not exist" Feb 18 20:04:22 crc kubenswrapper[4754]: I0218 20:04:22.220378 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78f841dd-e5b5-408d-83ab-34f2a8c89b12" path="/var/lib/kubelet/pods/78f841dd-e5b5-408d-83ab-34f2a8c89b12/volumes" Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.835456 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 20:04:28 crc kubenswrapper[4754]: E0218 20:04:28.836459 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78f841dd-e5b5-408d-83ab-34f2a8c89b12" containerName="registry-server" Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.836563 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="78f841dd-e5b5-408d-83ab-34f2a8c89b12" containerName="registry-server" Feb 18 20:04:28 crc kubenswrapper[4754]: E0218 20:04:28.836605 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78f841dd-e5b5-408d-83ab-34f2a8c89b12" containerName="extract-utilities" Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.836613 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="78f841dd-e5b5-408d-83ab-34f2a8c89b12" containerName="extract-utilities" Feb 18 20:04:28 crc kubenswrapper[4754]: E0218 20:04:28.836632 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78f841dd-e5b5-408d-83ab-34f2a8c89b12" containerName="extract-content" Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.836640 4754 
state_mem.go:107] "Deleted CPUSet assignment" podUID="78f841dd-e5b5-408d-83ab-34f2a8c89b12" containerName="extract-content" Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.836893 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="78f841dd-e5b5-408d-83ab-34f2a8c89b12" containerName="registry-server" Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.838617 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.841654 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mrfb5" Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.841741 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.841807 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.852065 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.865541 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.962059 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-config-data\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.962164 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.962217 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.962270 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85wxt\" (UniqueName: \"kubernetes.io/projected/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-kube-api-access-85wxt\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.962326 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.962502 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.962557 4754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.962746 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:28 crc kubenswrapper[4754]: I0218 20:04:28.962854 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.064253 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.064373 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.064447 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.064527 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-config-data\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.064617 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.064679 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.064724 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85wxt\" (UniqueName: \"kubernetes.io/projected/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-kube-api-access-85wxt\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.064772 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-test-operator-ephemeral-workdir\") pod 
\"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.064874 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.064882 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.065669 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.065824 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.065826 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-test-operator-ephemeral-workdir\") pod 
\"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.067178 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-config-data\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.072251 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.074103 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.075040 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.096291 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85wxt\" (UniqueName: \"kubernetes.io/projected/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-kube-api-access-85wxt\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 
20:04:29.102532 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.164383 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 20:04:29 crc kubenswrapper[4754]: I0218 20:04:29.728507 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 20:04:30 crc kubenswrapper[4754]: I0218 20:04:30.695112 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1","Type":"ContainerStarted","Data":"1e012fd1949b4b411202d8776015968e3ac3211911a5a34ece7bec78bdf38ce5"} Feb 18 20:04:40 crc kubenswrapper[4754]: I0218 20:04:40.814418 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1","Type":"ContainerStarted","Data":"b71feadadccecf3df28b71573caea88f0c868aec59a06cf3b9ba391616c3ec34"} Feb 18 20:04:40 crc kubenswrapper[4754]: I0218 20:04:40.834130 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.149991224 podStartE2EDuration="13.83411123s" podCreationTimestamp="2026-02-18 20:04:27 +0000 UTC" firstStartedPulling="2026-02-18 20:04:29.731990468 +0000 UTC m=+2772.182403264" lastFinishedPulling="2026-02-18 20:04:39.416110474 +0000 UTC m=+2781.866523270" observedRunningTime="2026-02-18 20:04:40.830837828 +0000 UTC m=+2783.281250624" watchObservedRunningTime="2026-02-18 20:04:40.83411123 +0000 UTC m=+2783.284524026" Feb 18 20:05:38 crc kubenswrapper[4754]: I0218 20:05:38.096438 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:05:38 crc kubenswrapper[4754]: I0218 20:05:38.099370 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:06:06 crc kubenswrapper[4754]: I0218 20:06:06.260564 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r8q7f"] Feb 18 20:06:06 crc kubenswrapper[4754]: I0218 20:06:06.268500 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r8q7f" Feb 18 20:06:06 crc kubenswrapper[4754]: I0218 20:06:06.314026 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r8q7f"] Feb 18 20:06:06 crc kubenswrapper[4754]: I0218 20:06:06.374735 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98243f96-3a90-4492-9732-16184172c600-utilities\") pod \"redhat-marketplace-r8q7f\" (UID: \"98243f96-3a90-4492-9732-16184172c600\") " pod="openshift-marketplace/redhat-marketplace-r8q7f" Feb 18 20:06:06 crc kubenswrapper[4754]: I0218 20:06:06.374852 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98243f96-3a90-4492-9732-16184172c600-catalog-content\") pod \"redhat-marketplace-r8q7f\" (UID: \"98243f96-3a90-4492-9732-16184172c600\") " pod="openshift-marketplace/redhat-marketplace-r8q7f" Feb 18 20:06:06 crc 
kubenswrapper[4754]: I0218 20:06:06.374882 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9t8z\" (UniqueName: \"kubernetes.io/projected/98243f96-3a90-4492-9732-16184172c600-kube-api-access-d9t8z\") pod \"redhat-marketplace-r8q7f\" (UID: \"98243f96-3a90-4492-9732-16184172c600\") " pod="openshift-marketplace/redhat-marketplace-r8q7f" Feb 18 20:06:06 crc kubenswrapper[4754]: I0218 20:06:06.476231 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98243f96-3a90-4492-9732-16184172c600-utilities\") pod \"redhat-marketplace-r8q7f\" (UID: \"98243f96-3a90-4492-9732-16184172c600\") " pod="openshift-marketplace/redhat-marketplace-r8q7f" Feb 18 20:06:06 crc kubenswrapper[4754]: I0218 20:06:06.476758 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98243f96-3a90-4492-9732-16184172c600-utilities\") pod \"redhat-marketplace-r8q7f\" (UID: \"98243f96-3a90-4492-9732-16184172c600\") " pod="openshift-marketplace/redhat-marketplace-r8q7f" Feb 18 20:06:06 crc kubenswrapper[4754]: I0218 20:06:06.477724 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98243f96-3a90-4492-9732-16184172c600-catalog-content\") pod \"redhat-marketplace-r8q7f\" (UID: \"98243f96-3a90-4492-9732-16184172c600\") " pod="openshift-marketplace/redhat-marketplace-r8q7f" Feb 18 20:06:06 crc kubenswrapper[4754]: I0218 20:06:06.477839 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9t8z\" (UniqueName: \"kubernetes.io/projected/98243f96-3a90-4492-9732-16184172c600-kube-api-access-d9t8z\") pod \"redhat-marketplace-r8q7f\" (UID: \"98243f96-3a90-4492-9732-16184172c600\") " pod="openshift-marketplace/redhat-marketplace-r8q7f" Feb 18 20:06:06 crc 
kubenswrapper[4754]: I0218 20:06:06.478067 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98243f96-3a90-4492-9732-16184172c600-catalog-content\") pod \"redhat-marketplace-r8q7f\" (UID: \"98243f96-3a90-4492-9732-16184172c600\") " pod="openshift-marketplace/redhat-marketplace-r8q7f" Feb 18 20:06:06 crc kubenswrapper[4754]: I0218 20:06:06.495964 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9t8z\" (UniqueName: \"kubernetes.io/projected/98243f96-3a90-4492-9732-16184172c600-kube-api-access-d9t8z\") pod \"redhat-marketplace-r8q7f\" (UID: \"98243f96-3a90-4492-9732-16184172c600\") " pod="openshift-marketplace/redhat-marketplace-r8q7f" Feb 18 20:06:06 crc kubenswrapper[4754]: I0218 20:06:06.604232 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r8q7f" Feb 18 20:06:07 crc kubenswrapper[4754]: I0218 20:06:07.072080 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r8q7f"] Feb 18 20:06:07 crc kubenswrapper[4754]: I0218 20:06:07.733988 4754 generic.go:334] "Generic (PLEG): container finished" podID="98243f96-3a90-4492-9732-16184172c600" containerID="f9a14755f94aa7438c4e8128c579a73a1ae4d1e89d23492be2b40dc818b7fe72" exitCode=0 Feb 18 20:06:07 crc kubenswrapper[4754]: I0218 20:06:07.734086 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8q7f" event={"ID":"98243f96-3a90-4492-9732-16184172c600","Type":"ContainerDied","Data":"f9a14755f94aa7438c4e8128c579a73a1ae4d1e89d23492be2b40dc818b7fe72"} Feb 18 20:06:07 crc kubenswrapper[4754]: I0218 20:06:07.734335 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8q7f" 
event={"ID":"98243f96-3a90-4492-9732-16184172c600","Type":"ContainerStarted","Data":"82de8418df2677080c305f7c7b9d81ef96398af8585f326885a83778d28a27c0"} Feb 18 20:06:08 crc kubenswrapper[4754]: I0218 20:06:08.096981 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:06:08 crc kubenswrapper[4754]: I0218 20:06:08.097115 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:06:08 crc kubenswrapper[4754]: I0218 20:06:08.754696 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8q7f" event={"ID":"98243f96-3a90-4492-9732-16184172c600","Type":"ContainerStarted","Data":"72b0532e352ea068fb3b93fabd48677e20167932b5d9b842020eb57c382a3f37"} Feb 18 20:06:09 crc kubenswrapper[4754]: I0218 20:06:09.776129 4754 generic.go:334] "Generic (PLEG): container finished" podID="98243f96-3a90-4492-9732-16184172c600" containerID="72b0532e352ea068fb3b93fabd48677e20167932b5d9b842020eb57c382a3f37" exitCode=0 Feb 18 20:06:09 crc kubenswrapper[4754]: I0218 20:06:09.776557 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8q7f" event={"ID":"98243f96-3a90-4492-9732-16184172c600","Type":"ContainerDied","Data":"72b0532e352ea068fb3b93fabd48677e20167932b5d9b842020eb57c382a3f37"} Feb 18 20:06:10 crc kubenswrapper[4754]: I0218 20:06:10.789035 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8q7f" 
event={"ID":"98243f96-3a90-4492-9732-16184172c600","Type":"ContainerStarted","Data":"b45555865338c8110ebd89a1cb3db669781caf867d1f4c6d6899e28a20e4e731"} Feb 18 20:06:10 crc kubenswrapper[4754]: I0218 20:06:10.813949 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r8q7f" podStartSLOduration=2.381139124 podStartE2EDuration="4.813927232s" podCreationTimestamp="2026-02-18 20:06:06 +0000 UTC" firstStartedPulling="2026-02-18 20:06:07.735820437 +0000 UTC m=+2870.186233233" lastFinishedPulling="2026-02-18 20:06:10.168608525 +0000 UTC m=+2872.619021341" observedRunningTime="2026-02-18 20:06:10.807607285 +0000 UTC m=+2873.258020101" watchObservedRunningTime="2026-02-18 20:06:10.813927232 +0000 UTC m=+2873.264340028" Feb 18 20:06:16 crc kubenswrapper[4754]: I0218 20:06:16.605173 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r8q7f" Feb 18 20:06:16 crc kubenswrapper[4754]: I0218 20:06:16.605730 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r8q7f" Feb 18 20:06:16 crc kubenswrapper[4754]: I0218 20:06:16.656523 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r8q7f" Feb 18 20:06:16 crc kubenswrapper[4754]: I0218 20:06:16.902946 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r8q7f" Feb 18 20:06:16 crc kubenswrapper[4754]: I0218 20:06:16.948252 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r8q7f"] Feb 18 20:06:18 crc kubenswrapper[4754]: I0218 20:06:18.868323 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r8q7f" podUID="98243f96-3a90-4492-9732-16184172c600" containerName="registry-server" 
containerID="cri-o://b45555865338c8110ebd89a1cb3db669781caf867d1f4c6d6899e28a20e4e731" gracePeriod=2 Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.372947 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r8q7f" Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.435745 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9t8z\" (UniqueName: \"kubernetes.io/projected/98243f96-3a90-4492-9732-16184172c600-kube-api-access-d9t8z\") pod \"98243f96-3a90-4492-9732-16184172c600\" (UID: \"98243f96-3a90-4492-9732-16184172c600\") " Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.435824 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98243f96-3a90-4492-9732-16184172c600-utilities\") pod \"98243f96-3a90-4492-9732-16184172c600\" (UID: \"98243f96-3a90-4492-9732-16184172c600\") " Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.436045 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98243f96-3a90-4492-9732-16184172c600-catalog-content\") pod \"98243f96-3a90-4492-9732-16184172c600\" (UID: \"98243f96-3a90-4492-9732-16184172c600\") " Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.437009 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98243f96-3a90-4492-9732-16184172c600-utilities" (OuterVolumeSpecName: "utilities") pod "98243f96-3a90-4492-9732-16184172c600" (UID: "98243f96-3a90-4492-9732-16184172c600"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.443689 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98243f96-3a90-4492-9732-16184172c600-kube-api-access-d9t8z" (OuterVolumeSpecName: "kube-api-access-d9t8z") pod "98243f96-3a90-4492-9732-16184172c600" (UID: "98243f96-3a90-4492-9732-16184172c600"). InnerVolumeSpecName "kube-api-access-d9t8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.464511 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98243f96-3a90-4492-9732-16184172c600-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98243f96-3a90-4492-9732-16184172c600" (UID: "98243f96-3a90-4492-9732-16184172c600"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.538895 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98243f96-3a90-4492-9732-16184172c600-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.538925 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9t8z\" (UniqueName: \"kubernetes.io/projected/98243f96-3a90-4492-9732-16184172c600-kube-api-access-d9t8z\") on node \"crc\" DevicePath \"\"" Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.539046 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98243f96-3a90-4492-9732-16184172c600-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.885982 4754 generic.go:334] "Generic (PLEG): container finished" podID="98243f96-3a90-4492-9732-16184172c600" 
containerID="b45555865338c8110ebd89a1cb3db669781caf867d1f4c6d6899e28a20e4e731" exitCode=0 Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.886038 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r8q7f" Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.886033 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8q7f" event={"ID":"98243f96-3a90-4492-9732-16184172c600","Type":"ContainerDied","Data":"b45555865338c8110ebd89a1cb3db669781caf867d1f4c6d6899e28a20e4e731"} Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.886376 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8q7f" event={"ID":"98243f96-3a90-4492-9732-16184172c600","Type":"ContainerDied","Data":"82de8418df2677080c305f7c7b9d81ef96398af8585f326885a83778d28a27c0"} Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.886394 4754 scope.go:117] "RemoveContainer" containerID="b45555865338c8110ebd89a1cb3db669781caf867d1f4c6d6899e28a20e4e731" Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.912334 4754 scope.go:117] "RemoveContainer" containerID="72b0532e352ea068fb3b93fabd48677e20167932b5d9b842020eb57c382a3f37" Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.923922 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r8q7f"] Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.934882 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r8q7f"] Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.959120 4754 scope.go:117] "RemoveContainer" containerID="f9a14755f94aa7438c4e8128c579a73a1ae4d1e89d23492be2b40dc818b7fe72" Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.983660 4754 scope.go:117] "RemoveContainer" containerID="b45555865338c8110ebd89a1cb3db669781caf867d1f4c6d6899e28a20e4e731" Feb 18 
20:06:19 crc kubenswrapper[4754]: E0218 20:06:19.984034 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b45555865338c8110ebd89a1cb3db669781caf867d1f4c6d6899e28a20e4e731\": container with ID starting with b45555865338c8110ebd89a1cb3db669781caf867d1f4c6d6899e28a20e4e731 not found: ID does not exist" containerID="b45555865338c8110ebd89a1cb3db669781caf867d1f4c6d6899e28a20e4e731" Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.984065 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b45555865338c8110ebd89a1cb3db669781caf867d1f4c6d6899e28a20e4e731"} err="failed to get container status \"b45555865338c8110ebd89a1cb3db669781caf867d1f4c6d6899e28a20e4e731\": rpc error: code = NotFound desc = could not find container \"b45555865338c8110ebd89a1cb3db669781caf867d1f4c6d6899e28a20e4e731\": container with ID starting with b45555865338c8110ebd89a1cb3db669781caf867d1f4c6d6899e28a20e4e731 not found: ID does not exist" Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.984086 4754 scope.go:117] "RemoveContainer" containerID="72b0532e352ea068fb3b93fabd48677e20167932b5d9b842020eb57c382a3f37" Feb 18 20:06:19 crc kubenswrapper[4754]: E0218 20:06:19.984412 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72b0532e352ea068fb3b93fabd48677e20167932b5d9b842020eb57c382a3f37\": container with ID starting with 72b0532e352ea068fb3b93fabd48677e20167932b5d9b842020eb57c382a3f37 not found: ID does not exist" containerID="72b0532e352ea068fb3b93fabd48677e20167932b5d9b842020eb57c382a3f37" Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.984430 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72b0532e352ea068fb3b93fabd48677e20167932b5d9b842020eb57c382a3f37"} err="failed to get container status 
\"72b0532e352ea068fb3b93fabd48677e20167932b5d9b842020eb57c382a3f37\": rpc error: code = NotFound desc = could not find container \"72b0532e352ea068fb3b93fabd48677e20167932b5d9b842020eb57c382a3f37\": container with ID starting with 72b0532e352ea068fb3b93fabd48677e20167932b5d9b842020eb57c382a3f37 not found: ID does not exist" Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.984442 4754 scope.go:117] "RemoveContainer" containerID="f9a14755f94aa7438c4e8128c579a73a1ae4d1e89d23492be2b40dc818b7fe72" Feb 18 20:06:19 crc kubenswrapper[4754]: E0218 20:06:19.984885 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9a14755f94aa7438c4e8128c579a73a1ae4d1e89d23492be2b40dc818b7fe72\": container with ID starting with f9a14755f94aa7438c4e8128c579a73a1ae4d1e89d23492be2b40dc818b7fe72 not found: ID does not exist" containerID="f9a14755f94aa7438c4e8128c579a73a1ae4d1e89d23492be2b40dc818b7fe72" Feb 18 20:06:19 crc kubenswrapper[4754]: I0218 20:06:19.984902 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9a14755f94aa7438c4e8128c579a73a1ae4d1e89d23492be2b40dc818b7fe72"} err="failed to get container status \"f9a14755f94aa7438c4e8128c579a73a1ae4d1e89d23492be2b40dc818b7fe72\": rpc error: code = NotFound desc = could not find container \"f9a14755f94aa7438c4e8128c579a73a1ae4d1e89d23492be2b40dc818b7fe72\": container with ID starting with f9a14755f94aa7438c4e8128c579a73a1ae4d1e89d23492be2b40dc818b7fe72 not found: ID does not exist" Feb 18 20:06:20 crc kubenswrapper[4754]: I0218 20:06:20.225016 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98243f96-3a90-4492-9732-16184172c600" path="/var/lib/kubelet/pods/98243f96-3a90-4492-9732-16184172c600/volumes" Feb 18 20:06:38 crc kubenswrapper[4754]: I0218 20:06:38.096804 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:06:38 crc kubenswrapper[4754]: I0218 20:06:38.097549 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:06:38 crc kubenswrapper[4754]: I0218 20:06:38.097616 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 20:06:38 crc kubenswrapper[4754]: I0218 20:06:38.098639 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752"} pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:06:38 crc kubenswrapper[4754]: I0218 20:06:38.098740 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" containerID="cri-o://a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" gracePeriod=600 Feb 18 20:06:38 crc kubenswrapper[4754]: E0218 20:06:38.231548 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:06:39 crc kubenswrapper[4754]: I0218 20:06:39.081206 4754 generic.go:334] "Generic (PLEG): container finished" podID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" exitCode=0 Feb 18 20:06:39 crc kubenswrapper[4754]: I0218 20:06:39.081292 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerDied","Data":"a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752"} Feb 18 20:06:39 crc kubenswrapper[4754]: I0218 20:06:39.081527 4754 scope.go:117] "RemoveContainer" containerID="8ad271beffae4d53604516072d7e3753e99a9dd5613a29dc4bf6ae71a7b3a58b" Feb 18 20:06:39 crc kubenswrapper[4754]: I0218 20:06:39.082113 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:06:39 crc kubenswrapper[4754]: E0218 20:06:39.082381 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:06:49 crc kubenswrapper[4754]: I0218 20:06:49.209751 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:06:49 crc kubenswrapper[4754]: E0218 20:06:49.210633 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:06:50 crc kubenswrapper[4754]: I0218 20:06:50.900876 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-tdmfp" podUID="57a849ad-38ab-47a3-9d27-8a09850ae75f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.70:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 20:06:50 crc kubenswrapper[4754]: I0218 20:06:50.903209 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-85fc6d94f-wvhnp" podUID="29ea22ea-e1bc-4c19-b3a4-f8ea5a2cdb1f" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.55:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 20:07:04 crc kubenswrapper[4754]: I0218 20:07:04.210405 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:07:04 crc kubenswrapper[4754]: E0218 20:07:04.211765 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:07:17 crc kubenswrapper[4754]: I0218 20:07:17.210858 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:07:17 crc kubenswrapper[4754]: E0218 20:07:17.212039 4754 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:07:31 crc kubenswrapper[4754]: I0218 20:07:31.210246 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:07:31 crc kubenswrapper[4754]: E0218 20:07:31.210903 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:07:44 crc kubenswrapper[4754]: I0218 20:07:44.209972 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:07:44 crc kubenswrapper[4754]: E0218 20:07:44.211203 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:07:58 crc kubenswrapper[4754]: I0218 20:07:58.222088 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:07:58 crc kubenswrapper[4754]: E0218 
20:07:58.222884 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:08:10 crc kubenswrapper[4754]: I0218 20:08:10.210177 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:08:10 crc kubenswrapper[4754]: E0218 20:08:10.211224 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:08:24 crc kubenswrapper[4754]: I0218 20:08:24.214819 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:08:24 crc kubenswrapper[4754]: E0218 20:08:24.215981 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:08:35 crc kubenswrapper[4754]: I0218 20:08:35.210993 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:08:35 crc 
kubenswrapper[4754]: E0218 20:08:35.212369 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:08:46 crc kubenswrapper[4754]: I0218 20:08:46.256630 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:08:46 crc kubenswrapper[4754]: E0218 20:08:46.257547 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:09:01 crc kubenswrapper[4754]: I0218 20:09:01.210578 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:09:01 crc kubenswrapper[4754]: E0218 20:09:01.211463 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:09:16 crc kubenswrapper[4754]: I0218 20:09:16.209705 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 
18 20:09:16 crc kubenswrapper[4754]: E0218 20:09:16.210510 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:09:31 crc kubenswrapper[4754]: I0218 20:09:31.210658 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:09:31 crc kubenswrapper[4754]: E0218 20:09:31.211483 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:09:46 crc kubenswrapper[4754]: I0218 20:09:46.209916 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:09:46 crc kubenswrapper[4754]: E0218 20:09:46.210766 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:09:57 crc kubenswrapper[4754]: I0218 20:09:57.209930 4754 scope.go:117] "RemoveContainer" 
containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:09:57 crc kubenswrapper[4754]: E0218 20:09:57.210800 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:10:08 crc kubenswrapper[4754]: I0218 20:10:08.216954 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:10:08 crc kubenswrapper[4754]: E0218 20:10:08.218299 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:10:21 crc kubenswrapper[4754]: I0218 20:10:21.209698 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:10:21 crc kubenswrapper[4754]: E0218 20:10:21.210357 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:10:36 crc kubenswrapper[4754]: I0218 20:10:36.210751 4754 scope.go:117] 
"RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:10:36 crc kubenswrapper[4754]: E0218 20:10:36.212963 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:10:49 crc kubenswrapper[4754]: I0218 20:10:49.210711 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:10:49 crc kubenswrapper[4754]: E0218 20:10:49.211599 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:11:03 crc kubenswrapper[4754]: I0218 20:11:03.209738 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:11:03 crc kubenswrapper[4754]: E0218 20:11:03.210484 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:11:15 crc kubenswrapper[4754]: I0218 20:11:15.210135 
4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:11:15 crc kubenswrapper[4754]: E0218 20:11:15.211166 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:11:26 crc kubenswrapper[4754]: I0218 20:11:26.210304 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:11:26 crc kubenswrapper[4754]: E0218 20:11:26.211035 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:11:38 crc kubenswrapper[4754]: I0218 20:11:38.210960 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:11:39 crc kubenswrapper[4754]: I0218 20:11:39.103591 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"18fb9f12ddc4eddd2e154981fe0e4a5a76e41bccdbad7fd1512501a33ef4bc25"} Feb 18 20:12:08 crc kubenswrapper[4754]: I0218 20:12:08.620221 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v789t"] Feb 18 20:12:08 crc 
kubenswrapper[4754]: E0218 20:12:08.621278 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98243f96-3a90-4492-9732-16184172c600" containerName="extract-content" Feb 18 20:12:08 crc kubenswrapper[4754]: I0218 20:12:08.621297 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="98243f96-3a90-4492-9732-16184172c600" containerName="extract-content" Feb 18 20:12:08 crc kubenswrapper[4754]: E0218 20:12:08.621324 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98243f96-3a90-4492-9732-16184172c600" containerName="extract-utilities" Feb 18 20:12:08 crc kubenswrapper[4754]: I0218 20:12:08.621332 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="98243f96-3a90-4492-9732-16184172c600" containerName="extract-utilities" Feb 18 20:12:08 crc kubenswrapper[4754]: E0218 20:12:08.621371 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98243f96-3a90-4492-9732-16184172c600" containerName="registry-server" Feb 18 20:12:08 crc kubenswrapper[4754]: I0218 20:12:08.621380 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="98243f96-3a90-4492-9732-16184172c600" containerName="registry-server" Feb 18 20:12:08 crc kubenswrapper[4754]: I0218 20:12:08.621604 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="98243f96-3a90-4492-9732-16184172c600" containerName="registry-server" Feb 18 20:12:08 crc kubenswrapper[4754]: I0218 20:12:08.623685 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v789t" Feb 18 20:12:08 crc kubenswrapper[4754]: I0218 20:12:08.631067 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v789t"] Feb 18 20:12:08 crc kubenswrapper[4754]: I0218 20:12:08.812093 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nphd5\" (UniqueName: \"kubernetes.io/projected/8063d149-14e5-4e19-805b-cc4368746566-kube-api-access-nphd5\") pod \"certified-operators-v789t\" (UID: \"8063d149-14e5-4e19-805b-cc4368746566\") " pod="openshift-marketplace/certified-operators-v789t" Feb 18 20:12:08 crc kubenswrapper[4754]: I0218 20:12:08.813057 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8063d149-14e5-4e19-805b-cc4368746566-utilities\") pod \"certified-operators-v789t\" (UID: \"8063d149-14e5-4e19-805b-cc4368746566\") " pod="openshift-marketplace/certified-operators-v789t" Feb 18 20:12:08 crc kubenswrapper[4754]: I0218 20:12:08.813259 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8063d149-14e5-4e19-805b-cc4368746566-catalog-content\") pod \"certified-operators-v789t\" (UID: \"8063d149-14e5-4e19-805b-cc4368746566\") " pod="openshift-marketplace/certified-operators-v789t" Feb 18 20:12:08 crc kubenswrapper[4754]: I0218 20:12:08.915197 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nphd5\" (UniqueName: \"kubernetes.io/projected/8063d149-14e5-4e19-805b-cc4368746566-kube-api-access-nphd5\") pod \"certified-operators-v789t\" (UID: \"8063d149-14e5-4e19-805b-cc4368746566\") " pod="openshift-marketplace/certified-operators-v789t" Feb 18 20:12:08 crc kubenswrapper[4754]: I0218 20:12:08.915368 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8063d149-14e5-4e19-805b-cc4368746566-utilities\") pod \"certified-operators-v789t\" (UID: \"8063d149-14e5-4e19-805b-cc4368746566\") " pod="openshift-marketplace/certified-operators-v789t" Feb 18 20:12:08 crc kubenswrapper[4754]: I0218 20:12:08.915454 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8063d149-14e5-4e19-805b-cc4368746566-catalog-content\") pod \"certified-operators-v789t\" (UID: \"8063d149-14e5-4e19-805b-cc4368746566\") " pod="openshift-marketplace/certified-operators-v789t" Feb 18 20:12:08 crc kubenswrapper[4754]: I0218 20:12:08.916045 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8063d149-14e5-4e19-805b-cc4368746566-utilities\") pod \"certified-operators-v789t\" (UID: \"8063d149-14e5-4e19-805b-cc4368746566\") " pod="openshift-marketplace/certified-operators-v789t" Feb 18 20:12:08 crc kubenswrapper[4754]: I0218 20:12:08.916560 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8063d149-14e5-4e19-805b-cc4368746566-catalog-content\") pod \"certified-operators-v789t\" (UID: \"8063d149-14e5-4e19-805b-cc4368746566\") " pod="openshift-marketplace/certified-operators-v789t" Feb 18 20:12:08 crc kubenswrapper[4754]: I0218 20:12:08.942077 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nphd5\" (UniqueName: \"kubernetes.io/projected/8063d149-14e5-4e19-805b-cc4368746566-kube-api-access-nphd5\") pod \"certified-operators-v789t\" (UID: \"8063d149-14e5-4e19-805b-cc4368746566\") " pod="openshift-marketplace/certified-operators-v789t" Feb 18 20:12:08 crc kubenswrapper[4754]: I0218 20:12:08.956179 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v789t" Feb 18 20:12:09 crc kubenswrapper[4754]: I0218 20:12:09.516201 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v789t"] Feb 18 20:12:10 crc kubenswrapper[4754]: I0218 20:12:10.490693 4754 generic.go:334] "Generic (PLEG): container finished" podID="8063d149-14e5-4e19-805b-cc4368746566" containerID="e20c6fe5740ed70e3782d6bf307ca92cb0327d5a84f7feed73cab1453fbdf4eb" exitCode=0 Feb 18 20:12:10 crc kubenswrapper[4754]: I0218 20:12:10.490822 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v789t" event={"ID":"8063d149-14e5-4e19-805b-cc4368746566","Type":"ContainerDied","Data":"e20c6fe5740ed70e3782d6bf307ca92cb0327d5a84f7feed73cab1453fbdf4eb"} Feb 18 20:12:10 crc kubenswrapper[4754]: I0218 20:12:10.491027 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v789t" event={"ID":"8063d149-14e5-4e19-805b-cc4368746566","Type":"ContainerStarted","Data":"7e3020e86ddd780ee1b3b2f7e0982a6b9d01688496042b457029262597031a76"} Feb 18 20:12:10 crc kubenswrapper[4754]: I0218 20:12:10.493471 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:12:11 crc kubenswrapper[4754]: I0218 20:12:11.499571 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v789t" event={"ID":"8063d149-14e5-4e19-805b-cc4368746566","Type":"ContainerStarted","Data":"d963e82595f92b2503f73687b221c913ffb827131332357a5e44deece063b50d"} Feb 18 20:12:13 crc kubenswrapper[4754]: I0218 20:12:13.520432 4754 generic.go:334] "Generic (PLEG): container finished" podID="8063d149-14e5-4e19-805b-cc4368746566" containerID="d963e82595f92b2503f73687b221c913ffb827131332357a5e44deece063b50d" exitCode=0 Feb 18 20:12:13 crc kubenswrapper[4754]: I0218 20:12:13.520535 4754 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-v789t" event={"ID":"8063d149-14e5-4e19-805b-cc4368746566","Type":"ContainerDied","Data":"d963e82595f92b2503f73687b221c913ffb827131332357a5e44deece063b50d"} Feb 18 20:12:14 crc kubenswrapper[4754]: I0218 20:12:14.541538 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v789t" event={"ID":"8063d149-14e5-4e19-805b-cc4368746566","Type":"ContainerStarted","Data":"828e49fa8b72265d04c6baaa589984e164ddd6c4b760c102142e13076c283eae"} Feb 18 20:12:18 crc kubenswrapper[4754]: I0218 20:12:18.957325 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v789t" Feb 18 20:12:18 crc kubenswrapper[4754]: I0218 20:12:18.958186 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v789t" Feb 18 20:12:20 crc kubenswrapper[4754]: I0218 20:12:20.010290 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-v789t" podUID="8063d149-14e5-4e19-805b-cc4368746566" containerName="registry-server" probeResult="failure" output=< Feb 18 20:12:20 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Feb 18 20:12:20 crc kubenswrapper[4754]: > Feb 18 20:12:29 crc kubenswrapper[4754]: I0218 20:12:29.022070 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v789t" Feb 18 20:12:29 crc kubenswrapper[4754]: I0218 20:12:29.047660 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v789t" podStartSLOduration=17.585617662 podStartE2EDuration="21.047616576s" podCreationTimestamp="2026-02-18 20:12:08 +0000 UTC" firstStartedPulling="2026-02-18 20:12:10.49319037 +0000 UTC m=+3232.943603176" lastFinishedPulling="2026-02-18 20:12:13.955189294 +0000 UTC 
m=+3236.405602090" observedRunningTime="2026-02-18 20:12:14.560776397 +0000 UTC m=+3237.011189213" watchObservedRunningTime="2026-02-18 20:12:29.047616576 +0000 UTC m=+3251.498029382" Feb 18 20:12:29 crc kubenswrapper[4754]: I0218 20:12:29.088840 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v789t" Feb 18 20:12:29 crc kubenswrapper[4754]: I0218 20:12:29.261154 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v789t"] Feb 18 20:12:30 crc kubenswrapper[4754]: I0218 20:12:30.694240 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v789t" podUID="8063d149-14e5-4e19-805b-cc4368746566" containerName="registry-server" containerID="cri-o://828e49fa8b72265d04c6baaa589984e164ddd6c4b760c102142e13076c283eae" gracePeriod=2 Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.212784 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v789t" Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.266520 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8063d149-14e5-4e19-805b-cc4368746566-catalog-content\") pod \"8063d149-14e5-4e19-805b-cc4368746566\" (UID: \"8063d149-14e5-4e19-805b-cc4368746566\") " Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.266743 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8063d149-14e5-4e19-805b-cc4368746566-utilities\") pod \"8063d149-14e5-4e19-805b-cc4368746566\" (UID: \"8063d149-14e5-4e19-805b-cc4368746566\") " Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.266869 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nphd5\" (UniqueName: \"kubernetes.io/projected/8063d149-14e5-4e19-805b-cc4368746566-kube-api-access-nphd5\") pod \"8063d149-14e5-4e19-805b-cc4368746566\" (UID: \"8063d149-14e5-4e19-805b-cc4368746566\") " Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.267657 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8063d149-14e5-4e19-805b-cc4368746566-utilities" (OuterVolumeSpecName: "utilities") pod "8063d149-14e5-4e19-805b-cc4368746566" (UID: "8063d149-14e5-4e19-805b-cc4368746566"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.273223 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8063d149-14e5-4e19-805b-cc4368746566-kube-api-access-nphd5" (OuterVolumeSpecName: "kube-api-access-nphd5") pod "8063d149-14e5-4e19-805b-cc4368746566" (UID: "8063d149-14e5-4e19-805b-cc4368746566"). InnerVolumeSpecName "kube-api-access-nphd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.332851 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8063d149-14e5-4e19-805b-cc4368746566-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8063d149-14e5-4e19-805b-cc4368746566" (UID: "8063d149-14e5-4e19-805b-cc4368746566"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.368844 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nphd5\" (UniqueName: \"kubernetes.io/projected/8063d149-14e5-4e19-805b-cc4368746566-kube-api-access-nphd5\") on node \"crc\" DevicePath \"\"" Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.368882 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8063d149-14e5-4e19-805b-cc4368746566-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.368892 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8063d149-14e5-4e19-805b-cc4368746566-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.707254 4754 generic.go:334] "Generic (PLEG): container finished" podID="8063d149-14e5-4e19-805b-cc4368746566" containerID="828e49fa8b72265d04c6baaa589984e164ddd6c4b760c102142e13076c283eae" exitCode=0 Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.707319 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v789t" event={"ID":"8063d149-14e5-4e19-805b-cc4368746566","Type":"ContainerDied","Data":"828e49fa8b72265d04c6baaa589984e164ddd6c4b760c102142e13076c283eae"} Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.707572 4754 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-v789t" event={"ID":"8063d149-14e5-4e19-805b-cc4368746566","Type":"ContainerDied","Data":"7e3020e86ddd780ee1b3b2f7e0982a6b9d01688496042b457029262597031a76"} Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.707595 4754 scope.go:117] "RemoveContainer" containerID="828e49fa8b72265d04c6baaa589984e164ddd6c4b760c102142e13076c283eae" Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.707388 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v789t" Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.741755 4754 scope.go:117] "RemoveContainer" containerID="d963e82595f92b2503f73687b221c913ffb827131332357a5e44deece063b50d" Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.757512 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v789t"] Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.767950 4754 scope.go:117] "RemoveContainer" containerID="e20c6fe5740ed70e3782d6bf307ca92cb0327d5a84f7feed73cab1453fbdf4eb" Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.768050 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v789t"] Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.831158 4754 scope.go:117] "RemoveContainer" containerID="828e49fa8b72265d04c6baaa589984e164ddd6c4b760c102142e13076c283eae" Feb 18 20:12:31 crc kubenswrapper[4754]: E0218 20:12:31.831581 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"828e49fa8b72265d04c6baaa589984e164ddd6c4b760c102142e13076c283eae\": container with ID starting with 828e49fa8b72265d04c6baaa589984e164ddd6c4b760c102142e13076c283eae not found: ID does not exist" containerID="828e49fa8b72265d04c6baaa589984e164ddd6c4b760c102142e13076c283eae" Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 
20:12:31.831616 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828e49fa8b72265d04c6baaa589984e164ddd6c4b760c102142e13076c283eae"} err="failed to get container status \"828e49fa8b72265d04c6baaa589984e164ddd6c4b760c102142e13076c283eae\": rpc error: code = NotFound desc = could not find container \"828e49fa8b72265d04c6baaa589984e164ddd6c4b760c102142e13076c283eae\": container with ID starting with 828e49fa8b72265d04c6baaa589984e164ddd6c4b760c102142e13076c283eae not found: ID does not exist" Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.831641 4754 scope.go:117] "RemoveContainer" containerID="d963e82595f92b2503f73687b221c913ffb827131332357a5e44deece063b50d" Feb 18 20:12:31 crc kubenswrapper[4754]: E0218 20:12:31.831878 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d963e82595f92b2503f73687b221c913ffb827131332357a5e44deece063b50d\": container with ID starting with d963e82595f92b2503f73687b221c913ffb827131332357a5e44deece063b50d not found: ID does not exist" containerID="d963e82595f92b2503f73687b221c913ffb827131332357a5e44deece063b50d" Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.831919 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d963e82595f92b2503f73687b221c913ffb827131332357a5e44deece063b50d"} err="failed to get container status \"d963e82595f92b2503f73687b221c913ffb827131332357a5e44deece063b50d\": rpc error: code = NotFound desc = could not find container \"d963e82595f92b2503f73687b221c913ffb827131332357a5e44deece063b50d\": container with ID starting with d963e82595f92b2503f73687b221c913ffb827131332357a5e44deece063b50d not found: ID does not exist" Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.831948 4754 scope.go:117] "RemoveContainer" containerID="e20c6fe5740ed70e3782d6bf307ca92cb0327d5a84f7feed73cab1453fbdf4eb" Feb 18 20:12:31 crc 
kubenswrapper[4754]: E0218 20:12:31.832367 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e20c6fe5740ed70e3782d6bf307ca92cb0327d5a84f7feed73cab1453fbdf4eb\": container with ID starting with e20c6fe5740ed70e3782d6bf307ca92cb0327d5a84f7feed73cab1453fbdf4eb not found: ID does not exist" containerID="e20c6fe5740ed70e3782d6bf307ca92cb0327d5a84f7feed73cab1453fbdf4eb" Feb 18 20:12:31 crc kubenswrapper[4754]: I0218 20:12:31.832395 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e20c6fe5740ed70e3782d6bf307ca92cb0327d5a84f7feed73cab1453fbdf4eb"} err="failed to get container status \"e20c6fe5740ed70e3782d6bf307ca92cb0327d5a84f7feed73cab1453fbdf4eb\": rpc error: code = NotFound desc = could not find container \"e20c6fe5740ed70e3782d6bf307ca92cb0327d5a84f7feed73cab1453fbdf4eb\": container with ID starting with e20c6fe5740ed70e3782d6bf307ca92cb0327d5a84f7feed73cab1453fbdf4eb not found: ID does not exist" Feb 18 20:12:32 crc kubenswrapper[4754]: I0218 20:12:32.231436 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8063d149-14e5-4e19-805b-cc4368746566" path="/var/lib/kubelet/pods/8063d149-14e5-4e19-805b-cc4368746566/volumes" Feb 18 20:13:38 crc kubenswrapper[4754]: I0218 20:13:38.096868 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:13:38 crc kubenswrapper[4754]: I0218 20:13:38.097601 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 18 20:14:08 crc kubenswrapper[4754]: I0218 20:14:08.096749 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:14:08 crc kubenswrapper[4754]: I0218 20:14:08.097597 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:14:38 crc kubenswrapper[4754]: I0218 20:14:38.096716 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:14:38 crc kubenswrapper[4754]: I0218 20:14:38.097260 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:14:38 crc kubenswrapper[4754]: I0218 20:14:38.097300 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 20:14:38 crc kubenswrapper[4754]: I0218 20:14:38.097757 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"18fb9f12ddc4eddd2e154981fe0e4a5a76e41bccdbad7fd1512501a33ef4bc25"} pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:14:38 crc kubenswrapper[4754]: I0218 20:14:38.097808 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" containerID="cri-o://18fb9f12ddc4eddd2e154981fe0e4a5a76e41bccdbad7fd1512501a33ef4bc25" gracePeriod=600 Feb 18 20:14:39 crc kubenswrapper[4754]: I0218 20:14:38.999927 4754 generic.go:334] "Generic (PLEG): container finished" podID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerID="18fb9f12ddc4eddd2e154981fe0e4a5a76e41bccdbad7fd1512501a33ef4bc25" exitCode=0 Feb 18 20:14:39 crc kubenswrapper[4754]: I0218 20:14:38.999993 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerDied","Data":"18fb9f12ddc4eddd2e154981fe0e4a5a76e41bccdbad7fd1512501a33ef4bc25"} Feb 18 20:14:39 crc kubenswrapper[4754]: I0218 20:14:39.000499 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8"} Feb 18 20:14:39 crc kubenswrapper[4754]: I0218 20:14:39.000518 4754 scope.go:117] "RemoveContainer" containerID="a12c11210b8d509e03e27e9c9d8da6f1da2bd11e775bd0627b834d56a555c752" Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.162582 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x"] Feb 18 20:15:00 crc kubenswrapper[4754]: 
E0218 20:15:00.163453 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8063d149-14e5-4e19-805b-cc4368746566" containerName="extract-utilities" Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.163466 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8063d149-14e5-4e19-805b-cc4368746566" containerName="extract-utilities" Feb 18 20:15:00 crc kubenswrapper[4754]: E0218 20:15:00.163487 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8063d149-14e5-4e19-805b-cc4368746566" containerName="registry-server" Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.163493 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8063d149-14e5-4e19-805b-cc4368746566" containerName="registry-server" Feb 18 20:15:00 crc kubenswrapper[4754]: E0218 20:15:00.163513 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8063d149-14e5-4e19-805b-cc4368746566" containerName="extract-content" Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.163519 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8063d149-14e5-4e19-805b-cc4368746566" containerName="extract-content" Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.163683 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="8063d149-14e5-4e19-805b-cc4368746566" containerName="registry-server" Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.164497 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x" Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.167325 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.167598 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.175913 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x"] Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.344005 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7923d5b-b4ca-4723-b770-4ff4ba5de03e-secret-volume\") pod \"collect-profiles-29524095-r294x\" (UID: \"b7923d5b-b4ca-4723-b770-4ff4ba5de03e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x" Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.344093 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7923d5b-b4ca-4723-b770-4ff4ba5de03e-config-volume\") pod \"collect-profiles-29524095-r294x\" (UID: \"b7923d5b-b4ca-4723-b770-4ff4ba5de03e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x" Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.344175 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x25v7\" (UniqueName: \"kubernetes.io/projected/b7923d5b-b4ca-4723-b770-4ff4ba5de03e-kube-api-access-x25v7\") pod \"collect-profiles-29524095-r294x\" (UID: \"b7923d5b-b4ca-4723-b770-4ff4ba5de03e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x" Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.446464 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7923d5b-b4ca-4723-b770-4ff4ba5de03e-secret-volume\") pod \"collect-profiles-29524095-r294x\" (UID: \"b7923d5b-b4ca-4723-b770-4ff4ba5de03e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x" Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.446559 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7923d5b-b4ca-4723-b770-4ff4ba5de03e-config-volume\") pod \"collect-profiles-29524095-r294x\" (UID: \"b7923d5b-b4ca-4723-b770-4ff4ba5de03e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x" Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.446627 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x25v7\" (UniqueName: \"kubernetes.io/projected/b7923d5b-b4ca-4723-b770-4ff4ba5de03e-kube-api-access-x25v7\") pod \"collect-profiles-29524095-r294x\" (UID: \"b7923d5b-b4ca-4723-b770-4ff4ba5de03e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x" Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.448496 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7923d5b-b4ca-4723-b770-4ff4ba5de03e-config-volume\") pod \"collect-profiles-29524095-r294x\" (UID: \"b7923d5b-b4ca-4723-b770-4ff4ba5de03e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x" Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.460219 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b7923d5b-b4ca-4723-b770-4ff4ba5de03e-secret-volume\") pod \"collect-profiles-29524095-r294x\" (UID: \"b7923d5b-b4ca-4723-b770-4ff4ba5de03e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x" Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.465324 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x25v7\" (UniqueName: \"kubernetes.io/projected/b7923d5b-b4ca-4723-b770-4ff4ba5de03e-kube-api-access-x25v7\") pod \"collect-profiles-29524095-r294x\" (UID: \"b7923d5b-b4ca-4723-b770-4ff4ba5de03e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x" Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.507463 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x" Feb 18 20:15:00 crc kubenswrapper[4754]: I0218 20:15:00.951654 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x"] Feb 18 20:15:01 crc kubenswrapper[4754]: I0218 20:15:01.404856 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x" event={"ID":"b7923d5b-b4ca-4723-b770-4ff4ba5de03e","Type":"ContainerStarted","Data":"56fdf1ab5d16264659c4c815d538ce7204182be4ae088ce50b0a217e7c457e26"} Feb 18 20:15:01 crc kubenswrapper[4754]: I0218 20:15:01.405179 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x" event={"ID":"b7923d5b-b4ca-4723-b770-4ff4ba5de03e","Type":"ContainerStarted","Data":"ca191c2d0eed8d3590b017c2aa0e4dffbc0c2c2341b0f1d954ae963e5a139a00"} Feb 18 20:15:01 crc kubenswrapper[4754]: I0218 20:15:01.426694 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x" 
podStartSLOduration=1.426674414 podStartE2EDuration="1.426674414s" podCreationTimestamp="2026-02-18 20:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:15:01.418238272 +0000 UTC m=+3403.868651068" watchObservedRunningTime="2026-02-18 20:15:01.426674414 +0000 UTC m=+3403.877087220" Feb 18 20:15:02 crc kubenswrapper[4754]: I0218 20:15:02.418843 4754 generic.go:334] "Generic (PLEG): container finished" podID="b7923d5b-b4ca-4723-b770-4ff4ba5de03e" containerID="56fdf1ab5d16264659c4c815d538ce7204182be4ae088ce50b0a217e7c457e26" exitCode=0 Feb 18 20:15:02 crc kubenswrapper[4754]: I0218 20:15:02.418905 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x" event={"ID":"b7923d5b-b4ca-4723-b770-4ff4ba5de03e","Type":"ContainerDied","Data":"56fdf1ab5d16264659c4c815d538ce7204182be4ae088ce50b0a217e7c457e26"} Feb 18 20:15:03 crc kubenswrapper[4754]: I0218 20:15:03.866598 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x" Feb 18 20:15:04 crc kubenswrapper[4754]: I0218 20:15:04.009837 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7923d5b-b4ca-4723-b770-4ff4ba5de03e-secret-volume\") pod \"b7923d5b-b4ca-4723-b770-4ff4ba5de03e\" (UID: \"b7923d5b-b4ca-4723-b770-4ff4ba5de03e\") " Feb 18 20:15:04 crc kubenswrapper[4754]: I0218 20:15:04.010087 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7923d5b-b4ca-4723-b770-4ff4ba5de03e-config-volume\") pod \"b7923d5b-b4ca-4723-b770-4ff4ba5de03e\" (UID: \"b7923d5b-b4ca-4723-b770-4ff4ba5de03e\") " Feb 18 20:15:04 crc kubenswrapper[4754]: I0218 20:15:04.010249 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x25v7\" (UniqueName: \"kubernetes.io/projected/b7923d5b-b4ca-4723-b770-4ff4ba5de03e-kube-api-access-x25v7\") pod \"b7923d5b-b4ca-4723-b770-4ff4ba5de03e\" (UID: \"b7923d5b-b4ca-4723-b770-4ff4ba5de03e\") " Feb 18 20:15:04 crc kubenswrapper[4754]: I0218 20:15:04.011227 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7923d5b-b4ca-4723-b770-4ff4ba5de03e-config-volume" (OuterVolumeSpecName: "config-volume") pod "b7923d5b-b4ca-4723-b770-4ff4ba5de03e" (UID: "b7923d5b-b4ca-4723-b770-4ff4ba5de03e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:15:04 crc kubenswrapper[4754]: I0218 20:15:04.015990 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7923d5b-b4ca-4723-b770-4ff4ba5de03e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b7923d5b-b4ca-4723-b770-4ff4ba5de03e" (UID: "b7923d5b-b4ca-4723-b770-4ff4ba5de03e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:15:04 crc kubenswrapper[4754]: I0218 20:15:04.019378 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7923d5b-b4ca-4723-b770-4ff4ba5de03e-kube-api-access-x25v7" (OuterVolumeSpecName: "kube-api-access-x25v7") pod "b7923d5b-b4ca-4723-b770-4ff4ba5de03e" (UID: "b7923d5b-b4ca-4723-b770-4ff4ba5de03e"). InnerVolumeSpecName "kube-api-access-x25v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:15:04 crc kubenswrapper[4754]: I0218 20:15:04.129922 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x25v7\" (UniqueName: \"kubernetes.io/projected/b7923d5b-b4ca-4723-b770-4ff4ba5de03e-kube-api-access-x25v7\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:04 crc kubenswrapper[4754]: I0218 20:15:04.130298 4754 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7923d5b-b4ca-4723-b770-4ff4ba5de03e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:04 crc kubenswrapper[4754]: I0218 20:15:04.130311 4754 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7923d5b-b4ca-4723-b770-4ff4ba5de03e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:04 crc kubenswrapper[4754]: I0218 20:15:04.440592 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x" event={"ID":"b7923d5b-b4ca-4723-b770-4ff4ba5de03e","Type":"ContainerDied","Data":"ca191c2d0eed8d3590b017c2aa0e4dffbc0c2c2341b0f1d954ae963e5a139a00"} Feb 18 20:15:04 crc kubenswrapper[4754]: I0218 20:15:04.440638 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca191c2d0eed8d3590b017c2aa0e4dffbc0c2c2341b0f1d954ae963e5a139a00" Feb 18 20:15:04 crc kubenswrapper[4754]: I0218 20:15:04.440665 4754 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-r294x" Feb 18 20:15:04 crc kubenswrapper[4754]: I0218 20:15:04.517483 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6"] Feb 18 20:15:04 crc kubenswrapper[4754]: I0218 20:15:04.529494 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-tsfz6"] Feb 18 20:15:06 crc kubenswrapper[4754]: I0218 20:15:06.230441 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03f749c4-d445-4f49-a8da-48cf3519f172" path="/var/lib/kubelet/pods/03f749c4-d445-4f49-a8da-48cf3519f172/volumes" Feb 18 20:15:09 crc kubenswrapper[4754]: I0218 20:15:09.484638 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9nlk7"] Feb 18 20:15:09 crc kubenswrapper[4754]: E0218 20:15:09.486866 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7923d5b-b4ca-4723-b770-4ff4ba5de03e" containerName="collect-profiles" Feb 18 20:15:09 crc kubenswrapper[4754]: I0218 20:15:09.486994 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7923d5b-b4ca-4723-b770-4ff4ba5de03e" containerName="collect-profiles" Feb 18 20:15:09 crc kubenswrapper[4754]: I0218 20:15:09.487364 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7923d5b-b4ca-4723-b770-4ff4ba5de03e" containerName="collect-profiles" Feb 18 20:15:09 crc kubenswrapper[4754]: I0218 20:15:09.489590 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9nlk7" Feb 18 20:15:09 crc kubenswrapper[4754]: I0218 20:15:09.499979 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9nlk7"] Feb 18 20:15:09 crc kubenswrapper[4754]: I0218 20:15:09.549878 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvnrr\" (UniqueName: \"kubernetes.io/projected/c5c5277a-2b23-4626-8c8e-52fc4905971f-kube-api-access-jvnrr\") pod \"community-operators-9nlk7\" (UID: \"c5c5277a-2b23-4626-8c8e-52fc4905971f\") " pod="openshift-marketplace/community-operators-9nlk7" Feb 18 20:15:09 crc kubenswrapper[4754]: I0218 20:15:09.549967 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5c5277a-2b23-4626-8c8e-52fc4905971f-utilities\") pod \"community-operators-9nlk7\" (UID: \"c5c5277a-2b23-4626-8c8e-52fc4905971f\") " pod="openshift-marketplace/community-operators-9nlk7" Feb 18 20:15:09 crc kubenswrapper[4754]: I0218 20:15:09.549999 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5c5277a-2b23-4626-8c8e-52fc4905971f-catalog-content\") pod \"community-operators-9nlk7\" (UID: \"c5c5277a-2b23-4626-8c8e-52fc4905971f\") " pod="openshift-marketplace/community-operators-9nlk7" Feb 18 20:15:09 crc kubenswrapper[4754]: I0218 20:15:09.651108 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5c5277a-2b23-4626-8c8e-52fc4905971f-utilities\") pod \"community-operators-9nlk7\" (UID: \"c5c5277a-2b23-4626-8c8e-52fc4905971f\") " pod="openshift-marketplace/community-operators-9nlk7" Feb 18 20:15:09 crc kubenswrapper[4754]: I0218 20:15:09.651178 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5c5277a-2b23-4626-8c8e-52fc4905971f-catalog-content\") pod \"community-operators-9nlk7\" (UID: \"c5c5277a-2b23-4626-8c8e-52fc4905971f\") " pod="openshift-marketplace/community-operators-9nlk7" Feb 18 20:15:09 crc kubenswrapper[4754]: I0218 20:15:09.651317 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvnrr\" (UniqueName: \"kubernetes.io/projected/c5c5277a-2b23-4626-8c8e-52fc4905971f-kube-api-access-jvnrr\") pod \"community-operators-9nlk7\" (UID: \"c5c5277a-2b23-4626-8c8e-52fc4905971f\") " pod="openshift-marketplace/community-operators-9nlk7" Feb 18 20:15:09 crc kubenswrapper[4754]: I0218 20:15:09.651926 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5c5277a-2b23-4626-8c8e-52fc4905971f-utilities\") pod \"community-operators-9nlk7\" (UID: \"c5c5277a-2b23-4626-8c8e-52fc4905971f\") " pod="openshift-marketplace/community-operators-9nlk7" Feb 18 20:15:09 crc kubenswrapper[4754]: I0218 20:15:09.651990 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5c5277a-2b23-4626-8c8e-52fc4905971f-catalog-content\") pod \"community-operators-9nlk7\" (UID: \"c5c5277a-2b23-4626-8c8e-52fc4905971f\") " pod="openshift-marketplace/community-operators-9nlk7" Feb 18 20:15:09 crc kubenswrapper[4754]: I0218 20:15:09.672082 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvnrr\" (UniqueName: \"kubernetes.io/projected/c5c5277a-2b23-4626-8c8e-52fc4905971f-kube-api-access-jvnrr\") pod \"community-operators-9nlk7\" (UID: \"c5c5277a-2b23-4626-8c8e-52fc4905971f\") " pod="openshift-marketplace/community-operators-9nlk7" Feb 18 20:15:09 crc kubenswrapper[4754]: I0218 20:15:09.815747 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9nlk7" Feb 18 20:15:10 crc kubenswrapper[4754]: I0218 20:15:10.332184 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9nlk7"] Feb 18 20:15:10 crc kubenswrapper[4754]: W0218 20:15:10.355026 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5c5277a_2b23_4626_8c8e_52fc4905971f.slice/crio-6e6657e5c5e35e183b16d92392d8ca470bbfd2aa8d8ddf713312c662a19a1fde WatchSource:0}: Error finding container 6e6657e5c5e35e183b16d92392d8ca470bbfd2aa8d8ddf713312c662a19a1fde: Status 404 returned error can't find the container with id 6e6657e5c5e35e183b16d92392d8ca470bbfd2aa8d8ddf713312c662a19a1fde Feb 18 20:15:10 crc kubenswrapper[4754]: I0218 20:15:10.510562 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nlk7" event={"ID":"c5c5277a-2b23-4626-8c8e-52fc4905971f","Type":"ContainerStarted","Data":"6e6657e5c5e35e183b16d92392d8ca470bbfd2aa8d8ddf713312c662a19a1fde"} Feb 18 20:15:11 crc kubenswrapper[4754]: I0218 20:15:11.523580 4754 generic.go:334] "Generic (PLEG): container finished" podID="c5c5277a-2b23-4626-8c8e-52fc4905971f" containerID="66636b5a98810513e8f4502d35476a4f267b8850e95d39166955b43208c5458e" exitCode=0 Feb 18 20:15:11 crc kubenswrapper[4754]: I0218 20:15:11.523903 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nlk7" event={"ID":"c5c5277a-2b23-4626-8c8e-52fc4905971f","Type":"ContainerDied","Data":"66636b5a98810513e8f4502d35476a4f267b8850e95d39166955b43208c5458e"} Feb 18 20:15:12 crc kubenswrapper[4754]: I0218 20:15:12.534595 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nlk7" 
event={"ID":"c5c5277a-2b23-4626-8c8e-52fc4905971f","Type":"ContainerStarted","Data":"2c069df2a0eca86fdb74ab6e28b46f3303fc108dce9d6a97e8737750de432d76"} Feb 18 20:15:14 crc kubenswrapper[4754]: I0218 20:15:14.559887 4754 generic.go:334] "Generic (PLEG): container finished" podID="c5c5277a-2b23-4626-8c8e-52fc4905971f" containerID="2c069df2a0eca86fdb74ab6e28b46f3303fc108dce9d6a97e8737750de432d76" exitCode=0 Feb 18 20:15:14 crc kubenswrapper[4754]: I0218 20:15:14.559984 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nlk7" event={"ID":"c5c5277a-2b23-4626-8c8e-52fc4905971f","Type":"ContainerDied","Data":"2c069df2a0eca86fdb74ab6e28b46f3303fc108dce9d6a97e8737750de432d76"} Feb 18 20:15:16 crc kubenswrapper[4754]: I0218 20:15:16.585001 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nlk7" event={"ID":"c5c5277a-2b23-4626-8c8e-52fc4905971f","Type":"ContainerStarted","Data":"00cce9f62031b011b253af27c2eabb8c41d4407c4ff8f0f0097240b1a7b3f514"} Feb 18 20:15:16 crc kubenswrapper[4754]: I0218 20:15:16.610262 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9nlk7" podStartSLOduration=3.629470616 podStartE2EDuration="7.610237959s" podCreationTimestamp="2026-02-18 20:15:09 +0000 UTC" firstStartedPulling="2026-02-18 20:15:11.526312143 +0000 UTC m=+3413.976724979" lastFinishedPulling="2026-02-18 20:15:15.507079496 +0000 UTC m=+3417.957492322" observedRunningTime="2026-02-18 20:15:16.603714735 +0000 UTC m=+3419.054127541" watchObservedRunningTime="2026-02-18 20:15:16.610237959 +0000 UTC m=+3419.060650785" Feb 18 20:15:19 crc kubenswrapper[4754]: I0218 20:15:19.816666 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9nlk7" Feb 18 20:15:19 crc kubenswrapper[4754]: I0218 20:15:19.817035 4754 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-9nlk7" Feb 18 20:15:19 crc kubenswrapper[4754]: I0218 20:15:19.886952 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9nlk7" Feb 18 20:15:20 crc kubenswrapper[4754]: I0218 20:15:20.675844 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9nlk7" Feb 18 20:15:20 crc kubenswrapper[4754]: I0218 20:15:20.724873 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9nlk7"] Feb 18 20:15:22 crc kubenswrapper[4754]: I0218 20:15:22.646751 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9nlk7" podUID="c5c5277a-2b23-4626-8c8e-52fc4905971f" containerName="registry-server" containerID="cri-o://00cce9f62031b011b253af27c2eabb8c41d4407c4ff8f0f0097240b1a7b3f514" gracePeriod=2 Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.209981 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9nlk7" Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.335339 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5c5277a-2b23-4626-8c8e-52fc4905971f-catalog-content\") pod \"c5c5277a-2b23-4626-8c8e-52fc4905971f\" (UID: \"c5c5277a-2b23-4626-8c8e-52fc4905971f\") " Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.335487 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5c5277a-2b23-4626-8c8e-52fc4905971f-utilities\") pod \"c5c5277a-2b23-4626-8c8e-52fc4905971f\" (UID: \"c5c5277a-2b23-4626-8c8e-52fc4905971f\") " Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.335605 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvnrr\" (UniqueName: \"kubernetes.io/projected/c5c5277a-2b23-4626-8c8e-52fc4905971f-kube-api-access-jvnrr\") pod \"c5c5277a-2b23-4626-8c8e-52fc4905971f\" (UID: \"c5c5277a-2b23-4626-8c8e-52fc4905971f\") " Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.336861 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5c5277a-2b23-4626-8c8e-52fc4905971f-utilities" (OuterVolumeSpecName: "utilities") pod "c5c5277a-2b23-4626-8c8e-52fc4905971f" (UID: "c5c5277a-2b23-4626-8c8e-52fc4905971f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.345315 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c5277a-2b23-4626-8c8e-52fc4905971f-kube-api-access-jvnrr" (OuterVolumeSpecName: "kube-api-access-jvnrr") pod "c5c5277a-2b23-4626-8c8e-52fc4905971f" (UID: "c5c5277a-2b23-4626-8c8e-52fc4905971f"). InnerVolumeSpecName "kube-api-access-jvnrr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.385386 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5c5277a-2b23-4626-8c8e-52fc4905971f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5c5277a-2b23-4626-8c8e-52fc4905971f" (UID: "c5c5277a-2b23-4626-8c8e-52fc4905971f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.437513 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvnrr\" (UniqueName: \"kubernetes.io/projected/c5c5277a-2b23-4626-8c8e-52fc4905971f-kube-api-access-jvnrr\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.437545 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5c5277a-2b23-4626-8c8e-52fc4905971f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.437554 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5c5277a-2b23-4626-8c8e-52fc4905971f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.658628 4754 generic.go:334] "Generic (PLEG): container finished" podID="c5c5277a-2b23-4626-8c8e-52fc4905971f" containerID="00cce9f62031b011b253af27c2eabb8c41d4407c4ff8f0f0097240b1a7b3f514" exitCode=0 Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.658680 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nlk7" event={"ID":"c5c5277a-2b23-4626-8c8e-52fc4905971f","Type":"ContainerDied","Data":"00cce9f62031b011b253af27c2eabb8c41d4407c4ff8f0f0097240b1a7b3f514"} Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.658699 4754 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-9nlk7" Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.658735 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nlk7" event={"ID":"c5c5277a-2b23-4626-8c8e-52fc4905971f","Type":"ContainerDied","Data":"6e6657e5c5e35e183b16d92392d8ca470bbfd2aa8d8ddf713312c662a19a1fde"} Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.658758 4754 scope.go:117] "RemoveContainer" containerID="00cce9f62031b011b253af27c2eabb8c41d4407c4ff8f0f0097240b1a7b3f514" Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.683117 4754 scope.go:117] "RemoveContainer" containerID="2c069df2a0eca86fdb74ab6e28b46f3303fc108dce9d6a97e8737750de432d76" Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.702546 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9nlk7"] Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.711652 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9nlk7"] Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.728083 4754 scope.go:117] "RemoveContainer" containerID="66636b5a98810513e8f4502d35476a4f267b8850e95d39166955b43208c5458e" Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.751733 4754 scope.go:117] "RemoveContainer" containerID="00cce9f62031b011b253af27c2eabb8c41d4407c4ff8f0f0097240b1a7b3f514" Feb 18 20:15:23 crc kubenswrapper[4754]: E0218 20:15:23.752551 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00cce9f62031b011b253af27c2eabb8c41d4407c4ff8f0f0097240b1a7b3f514\": container with ID starting with 00cce9f62031b011b253af27c2eabb8c41d4407c4ff8f0f0097240b1a7b3f514 not found: ID does not exist" containerID="00cce9f62031b011b253af27c2eabb8c41d4407c4ff8f0f0097240b1a7b3f514" Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.752599 
4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00cce9f62031b011b253af27c2eabb8c41d4407c4ff8f0f0097240b1a7b3f514"} err="failed to get container status \"00cce9f62031b011b253af27c2eabb8c41d4407c4ff8f0f0097240b1a7b3f514\": rpc error: code = NotFound desc = could not find container \"00cce9f62031b011b253af27c2eabb8c41d4407c4ff8f0f0097240b1a7b3f514\": container with ID starting with 00cce9f62031b011b253af27c2eabb8c41d4407c4ff8f0f0097240b1a7b3f514 not found: ID does not exist" Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.752628 4754 scope.go:117] "RemoveContainer" containerID="2c069df2a0eca86fdb74ab6e28b46f3303fc108dce9d6a97e8737750de432d76" Feb 18 20:15:23 crc kubenswrapper[4754]: E0218 20:15:23.752928 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c069df2a0eca86fdb74ab6e28b46f3303fc108dce9d6a97e8737750de432d76\": container with ID starting with 2c069df2a0eca86fdb74ab6e28b46f3303fc108dce9d6a97e8737750de432d76 not found: ID does not exist" containerID="2c069df2a0eca86fdb74ab6e28b46f3303fc108dce9d6a97e8737750de432d76" Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.752965 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c069df2a0eca86fdb74ab6e28b46f3303fc108dce9d6a97e8737750de432d76"} err="failed to get container status \"2c069df2a0eca86fdb74ab6e28b46f3303fc108dce9d6a97e8737750de432d76\": rpc error: code = NotFound desc = could not find container \"2c069df2a0eca86fdb74ab6e28b46f3303fc108dce9d6a97e8737750de432d76\": container with ID starting with 2c069df2a0eca86fdb74ab6e28b46f3303fc108dce9d6a97e8737750de432d76 not found: ID does not exist" Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.753015 4754 scope.go:117] "RemoveContainer" containerID="66636b5a98810513e8f4502d35476a4f267b8850e95d39166955b43208c5458e" Feb 18 20:15:23 crc kubenswrapper[4754]: E0218 
20:15:23.753447 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66636b5a98810513e8f4502d35476a4f267b8850e95d39166955b43208c5458e\": container with ID starting with 66636b5a98810513e8f4502d35476a4f267b8850e95d39166955b43208c5458e not found: ID does not exist" containerID="66636b5a98810513e8f4502d35476a4f267b8850e95d39166955b43208c5458e" Feb 18 20:15:23 crc kubenswrapper[4754]: I0218 20:15:23.753473 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66636b5a98810513e8f4502d35476a4f267b8850e95d39166955b43208c5458e"} err="failed to get container status \"66636b5a98810513e8f4502d35476a4f267b8850e95d39166955b43208c5458e\": rpc error: code = NotFound desc = could not find container \"66636b5a98810513e8f4502d35476a4f267b8850e95d39166955b43208c5458e\": container with ID starting with 66636b5a98810513e8f4502d35476a4f267b8850e95d39166955b43208c5458e not found: ID does not exist" Feb 18 20:15:24 crc kubenswrapper[4754]: I0218 20:15:24.225049 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5c5277a-2b23-4626-8c8e-52fc4905971f" path="/var/lib/kubelet/pods/c5c5277a-2b23-4626-8c8e-52fc4905971f/volumes" Feb 18 20:15:39 crc kubenswrapper[4754]: I0218 20:15:39.579408 4754 scope.go:117] "RemoveContainer" containerID="1d5c8301a28d5b184f3e221e359fab418cbb3c0d6c7e6bceffe4b6910caa3077" Feb 18 20:16:19 crc kubenswrapper[4754]: I0218 20:16:19.914959 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pfltq"] Feb 18 20:16:19 crc kubenswrapper[4754]: E0218 20:16:19.917350 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c5277a-2b23-4626-8c8e-52fc4905971f" containerName="registry-server" Feb 18 20:16:19 crc kubenswrapper[4754]: I0218 20:16:19.917593 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c5277a-2b23-4626-8c8e-52fc4905971f" 
containerName="registry-server" Feb 18 20:16:19 crc kubenswrapper[4754]: E0218 20:16:19.917683 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c5277a-2b23-4626-8c8e-52fc4905971f" containerName="extract-utilities" Feb 18 20:16:19 crc kubenswrapper[4754]: I0218 20:16:19.917756 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c5277a-2b23-4626-8c8e-52fc4905971f" containerName="extract-utilities" Feb 18 20:16:19 crc kubenswrapper[4754]: E0218 20:16:19.917858 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c5277a-2b23-4626-8c8e-52fc4905971f" containerName="extract-content" Feb 18 20:16:19 crc kubenswrapper[4754]: I0218 20:16:19.917975 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c5277a-2b23-4626-8c8e-52fc4905971f" containerName="extract-content" Feb 18 20:16:19 crc kubenswrapper[4754]: I0218 20:16:19.918357 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5c5277a-2b23-4626-8c8e-52fc4905971f" containerName="registry-server" Feb 18 20:16:19 crc kubenswrapper[4754]: I0218 20:16:19.920037 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfltq" Feb 18 20:16:19 crc kubenswrapper[4754]: I0218 20:16:19.928829 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfltq"] Feb 18 20:16:19 crc kubenswrapper[4754]: I0218 20:16:19.964577 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a1d3a8-12da-42b9-814f-1c07fc72083f-catalog-content\") pod \"redhat-marketplace-pfltq\" (UID: \"48a1d3a8-12da-42b9-814f-1c07fc72083f\") " pod="openshift-marketplace/redhat-marketplace-pfltq" Feb 18 20:16:19 crc kubenswrapper[4754]: I0218 20:16:19.964684 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a1d3a8-12da-42b9-814f-1c07fc72083f-utilities\") pod \"redhat-marketplace-pfltq\" (UID: \"48a1d3a8-12da-42b9-814f-1c07fc72083f\") " pod="openshift-marketplace/redhat-marketplace-pfltq" Feb 18 20:16:19 crc kubenswrapper[4754]: I0218 20:16:19.964873 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62282\" (UniqueName: \"kubernetes.io/projected/48a1d3a8-12da-42b9-814f-1c07fc72083f-kube-api-access-62282\") pod \"redhat-marketplace-pfltq\" (UID: \"48a1d3a8-12da-42b9-814f-1c07fc72083f\") " pod="openshift-marketplace/redhat-marketplace-pfltq" Feb 18 20:16:20 crc kubenswrapper[4754]: I0218 20:16:20.066416 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62282\" (UniqueName: \"kubernetes.io/projected/48a1d3a8-12da-42b9-814f-1c07fc72083f-kube-api-access-62282\") pod \"redhat-marketplace-pfltq\" (UID: \"48a1d3a8-12da-42b9-814f-1c07fc72083f\") " pod="openshift-marketplace/redhat-marketplace-pfltq" Feb 18 20:16:20 crc kubenswrapper[4754]: I0218 20:16:20.067081 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a1d3a8-12da-42b9-814f-1c07fc72083f-catalog-content\") pod \"redhat-marketplace-pfltq\" (UID: \"48a1d3a8-12da-42b9-814f-1c07fc72083f\") " pod="openshift-marketplace/redhat-marketplace-pfltq" Feb 18 20:16:20 crc kubenswrapper[4754]: I0218 20:16:20.067241 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a1d3a8-12da-42b9-814f-1c07fc72083f-utilities\") pod \"redhat-marketplace-pfltq\" (UID: \"48a1d3a8-12da-42b9-814f-1c07fc72083f\") " pod="openshift-marketplace/redhat-marketplace-pfltq" Feb 18 20:16:20 crc kubenswrapper[4754]: I0218 20:16:20.067662 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a1d3a8-12da-42b9-814f-1c07fc72083f-catalog-content\") pod \"redhat-marketplace-pfltq\" (UID: \"48a1d3a8-12da-42b9-814f-1c07fc72083f\") " pod="openshift-marketplace/redhat-marketplace-pfltq" Feb 18 20:16:20 crc kubenswrapper[4754]: I0218 20:16:20.067670 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a1d3a8-12da-42b9-814f-1c07fc72083f-utilities\") pod \"redhat-marketplace-pfltq\" (UID: \"48a1d3a8-12da-42b9-814f-1c07fc72083f\") " pod="openshift-marketplace/redhat-marketplace-pfltq" Feb 18 20:16:20 crc kubenswrapper[4754]: I0218 20:16:20.094024 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62282\" (UniqueName: \"kubernetes.io/projected/48a1d3a8-12da-42b9-814f-1c07fc72083f-kube-api-access-62282\") pod \"redhat-marketplace-pfltq\" (UID: \"48a1d3a8-12da-42b9-814f-1c07fc72083f\") " pod="openshift-marketplace/redhat-marketplace-pfltq" Feb 18 20:16:20 crc kubenswrapper[4754]: I0218 20:16:20.289796 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfltq" Feb 18 20:16:20 crc kubenswrapper[4754]: I0218 20:16:20.765974 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfltq"] Feb 18 20:16:21 crc kubenswrapper[4754]: I0218 20:16:21.368818 4754 generic.go:334] "Generic (PLEG): container finished" podID="48a1d3a8-12da-42b9-814f-1c07fc72083f" containerID="de514abb2f4907554d0b5f8c5d7b56668377d53a84e46ded4ec4f8fc594d0c13" exitCode=0 Feb 18 20:16:21 crc kubenswrapper[4754]: I0218 20:16:21.369112 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfltq" event={"ID":"48a1d3a8-12da-42b9-814f-1c07fc72083f","Type":"ContainerDied","Data":"de514abb2f4907554d0b5f8c5d7b56668377d53a84e46ded4ec4f8fc594d0c13"} Feb 18 20:16:21 crc kubenswrapper[4754]: I0218 20:16:21.369157 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfltq" event={"ID":"48a1d3a8-12da-42b9-814f-1c07fc72083f","Type":"ContainerStarted","Data":"47b8b95965b5316f32fdfe6159e0e1f66aa42c90ae8901fd78c5b3d4b8df9156"} Feb 18 20:16:22 crc kubenswrapper[4754]: I0218 20:16:22.381308 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfltq" event={"ID":"48a1d3a8-12da-42b9-814f-1c07fc72083f","Type":"ContainerStarted","Data":"47a1227eb2706616afbfa5bd6d1bd714a84fc7a9a819d886302b0c37aec9af06"} Feb 18 20:16:23 crc kubenswrapper[4754]: I0218 20:16:23.394426 4754 generic.go:334] "Generic (PLEG): container finished" podID="48a1d3a8-12da-42b9-814f-1c07fc72083f" containerID="47a1227eb2706616afbfa5bd6d1bd714a84fc7a9a819d886302b0c37aec9af06" exitCode=0 Feb 18 20:16:23 crc kubenswrapper[4754]: I0218 20:16:23.394565 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfltq" 
event={"ID":"48a1d3a8-12da-42b9-814f-1c07fc72083f","Type":"ContainerDied","Data":"47a1227eb2706616afbfa5bd6d1bd714a84fc7a9a819d886302b0c37aec9af06"} Feb 18 20:16:24 crc kubenswrapper[4754]: I0218 20:16:24.431509 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfltq" event={"ID":"48a1d3a8-12da-42b9-814f-1c07fc72083f","Type":"ContainerStarted","Data":"4cfebe387a033e29ea62f38c306a9100012cbd14dbd735ae79d9b4258e99b9bf"} Feb 18 20:16:24 crc kubenswrapper[4754]: I0218 20:16:24.454523 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pfltq" podStartSLOduration=3.02445297 podStartE2EDuration="5.454501433s" podCreationTimestamp="2026-02-18 20:16:19 +0000 UTC" firstStartedPulling="2026-02-18 20:16:21.370808746 +0000 UTC m=+3483.821221542" lastFinishedPulling="2026-02-18 20:16:23.800857199 +0000 UTC m=+3486.251270005" observedRunningTime="2026-02-18 20:16:24.451021155 +0000 UTC m=+3486.901433951" watchObservedRunningTime="2026-02-18 20:16:24.454501433 +0000 UTC m=+3486.904914229" Feb 18 20:16:30 crc kubenswrapper[4754]: I0218 20:16:30.291067 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pfltq" Feb 18 20:16:30 crc kubenswrapper[4754]: I0218 20:16:30.294386 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pfltq" Feb 18 20:16:30 crc kubenswrapper[4754]: I0218 20:16:30.345038 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pfltq" Feb 18 20:16:30 crc kubenswrapper[4754]: I0218 20:16:30.549622 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pfltq" Feb 18 20:16:30 crc kubenswrapper[4754]: I0218 20:16:30.606333 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-pfltq"] Feb 18 20:16:32 crc kubenswrapper[4754]: I0218 20:16:32.497523 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pfltq" podUID="48a1d3a8-12da-42b9-814f-1c07fc72083f" containerName="registry-server" containerID="cri-o://4cfebe387a033e29ea62f38c306a9100012cbd14dbd735ae79d9b4258e99b9bf" gracePeriod=2 Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.012408 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfltq" Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.036281 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a1d3a8-12da-42b9-814f-1c07fc72083f-catalog-content\") pod \"48a1d3a8-12da-42b9-814f-1c07fc72083f\" (UID: \"48a1d3a8-12da-42b9-814f-1c07fc72083f\") " Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.036407 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a1d3a8-12da-42b9-814f-1c07fc72083f-utilities\") pod \"48a1d3a8-12da-42b9-814f-1c07fc72083f\" (UID: \"48a1d3a8-12da-42b9-814f-1c07fc72083f\") " Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.036450 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62282\" (UniqueName: \"kubernetes.io/projected/48a1d3a8-12da-42b9-814f-1c07fc72083f-kube-api-access-62282\") pod \"48a1d3a8-12da-42b9-814f-1c07fc72083f\" (UID: \"48a1d3a8-12da-42b9-814f-1c07fc72083f\") " Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.037196 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48a1d3a8-12da-42b9-814f-1c07fc72083f-utilities" (OuterVolumeSpecName: "utilities") pod "48a1d3a8-12da-42b9-814f-1c07fc72083f" (UID: 
"48a1d3a8-12da-42b9-814f-1c07fc72083f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.044119 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a1d3a8-12da-42b9-814f-1c07fc72083f-kube-api-access-62282" (OuterVolumeSpecName: "kube-api-access-62282") pod "48a1d3a8-12da-42b9-814f-1c07fc72083f" (UID: "48a1d3a8-12da-42b9-814f-1c07fc72083f"). InnerVolumeSpecName "kube-api-access-62282". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.076355 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48a1d3a8-12da-42b9-814f-1c07fc72083f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48a1d3a8-12da-42b9-814f-1c07fc72083f" (UID: "48a1d3a8-12da-42b9-814f-1c07fc72083f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.139200 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a1d3a8-12da-42b9-814f-1c07fc72083f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.139480 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a1d3a8-12da-42b9-814f-1c07fc72083f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.139493 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62282\" (UniqueName: \"kubernetes.io/projected/48a1d3a8-12da-42b9-814f-1c07fc72083f-kube-api-access-62282\") on node \"crc\" DevicePath \"\"" Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.508551 4754 generic.go:334] "Generic (PLEG): container finished" 
podID="48a1d3a8-12da-42b9-814f-1c07fc72083f" containerID="4cfebe387a033e29ea62f38c306a9100012cbd14dbd735ae79d9b4258e99b9bf" exitCode=0 Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.508610 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfltq" event={"ID":"48a1d3a8-12da-42b9-814f-1c07fc72083f","Type":"ContainerDied","Data":"4cfebe387a033e29ea62f38c306a9100012cbd14dbd735ae79d9b4258e99b9bf"} Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.508646 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfltq" event={"ID":"48a1d3a8-12da-42b9-814f-1c07fc72083f","Type":"ContainerDied","Data":"47b8b95965b5316f32fdfe6159e0e1f66aa42c90ae8901fd78c5b3d4b8df9156"} Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.508675 4754 scope.go:117] "RemoveContainer" containerID="4cfebe387a033e29ea62f38c306a9100012cbd14dbd735ae79d9b4258e99b9bf" Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.508615 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfltq" Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.541336 4754 scope.go:117] "RemoveContainer" containerID="47a1227eb2706616afbfa5bd6d1bd714a84fc7a9a819d886302b0c37aec9af06" Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.559052 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfltq"] Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.568045 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfltq"] Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.573576 4754 scope.go:117] "RemoveContainer" containerID="de514abb2f4907554d0b5f8c5d7b56668377d53a84e46ded4ec4f8fc594d0c13" Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.623410 4754 scope.go:117] "RemoveContainer" containerID="4cfebe387a033e29ea62f38c306a9100012cbd14dbd735ae79d9b4258e99b9bf" Feb 18 20:16:33 crc kubenswrapper[4754]: E0218 20:16:33.623831 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cfebe387a033e29ea62f38c306a9100012cbd14dbd735ae79d9b4258e99b9bf\": container with ID starting with 4cfebe387a033e29ea62f38c306a9100012cbd14dbd735ae79d9b4258e99b9bf not found: ID does not exist" containerID="4cfebe387a033e29ea62f38c306a9100012cbd14dbd735ae79d9b4258e99b9bf" Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.623896 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cfebe387a033e29ea62f38c306a9100012cbd14dbd735ae79d9b4258e99b9bf"} err="failed to get container status \"4cfebe387a033e29ea62f38c306a9100012cbd14dbd735ae79d9b4258e99b9bf\": rpc error: code = NotFound desc = could not find container \"4cfebe387a033e29ea62f38c306a9100012cbd14dbd735ae79d9b4258e99b9bf\": container with ID starting with 4cfebe387a033e29ea62f38c306a9100012cbd14dbd735ae79d9b4258e99b9bf not found: 
ID does not exist" Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.623930 4754 scope.go:117] "RemoveContainer" containerID="47a1227eb2706616afbfa5bd6d1bd714a84fc7a9a819d886302b0c37aec9af06" Feb 18 20:16:33 crc kubenswrapper[4754]: E0218 20:16:33.624674 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47a1227eb2706616afbfa5bd6d1bd714a84fc7a9a819d886302b0c37aec9af06\": container with ID starting with 47a1227eb2706616afbfa5bd6d1bd714a84fc7a9a819d886302b0c37aec9af06 not found: ID does not exist" containerID="47a1227eb2706616afbfa5bd6d1bd714a84fc7a9a819d886302b0c37aec9af06" Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.624739 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47a1227eb2706616afbfa5bd6d1bd714a84fc7a9a819d886302b0c37aec9af06"} err="failed to get container status \"47a1227eb2706616afbfa5bd6d1bd714a84fc7a9a819d886302b0c37aec9af06\": rpc error: code = NotFound desc = could not find container \"47a1227eb2706616afbfa5bd6d1bd714a84fc7a9a819d886302b0c37aec9af06\": container with ID starting with 47a1227eb2706616afbfa5bd6d1bd714a84fc7a9a819d886302b0c37aec9af06 not found: ID does not exist" Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.624788 4754 scope.go:117] "RemoveContainer" containerID="de514abb2f4907554d0b5f8c5d7b56668377d53a84e46ded4ec4f8fc594d0c13" Feb 18 20:16:33 crc kubenswrapper[4754]: E0218 20:16:33.625124 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de514abb2f4907554d0b5f8c5d7b56668377d53a84e46ded4ec4f8fc594d0c13\": container with ID starting with de514abb2f4907554d0b5f8c5d7b56668377d53a84e46ded4ec4f8fc594d0c13 not found: ID does not exist" containerID="de514abb2f4907554d0b5f8c5d7b56668377d53a84e46ded4ec4f8fc594d0c13" Feb 18 20:16:33 crc kubenswrapper[4754]: I0218 20:16:33.625175 4754 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de514abb2f4907554d0b5f8c5d7b56668377d53a84e46ded4ec4f8fc594d0c13"} err="failed to get container status \"de514abb2f4907554d0b5f8c5d7b56668377d53a84e46ded4ec4f8fc594d0c13\": rpc error: code = NotFound desc = could not find container \"de514abb2f4907554d0b5f8c5d7b56668377d53a84e46ded4ec4f8fc594d0c13\": container with ID starting with de514abb2f4907554d0b5f8c5d7b56668377d53a84e46ded4ec4f8fc594d0c13 not found: ID does not exist" Feb 18 20:16:34 crc kubenswrapper[4754]: I0218 20:16:34.219480 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48a1d3a8-12da-42b9-814f-1c07fc72083f" path="/var/lib/kubelet/pods/48a1d3a8-12da-42b9-814f-1c07fc72083f/volumes" Feb 18 20:16:38 crc kubenswrapper[4754]: I0218 20:16:38.097045 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:16:38 crc kubenswrapper[4754]: I0218 20:16:38.097866 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:17:08 crc kubenswrapper[4754]: I0218 20:17:08.097333 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:17:08 crc kubenswrapper[4754]: I0218 20:17:08.097887 4754 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:17:25 crc kubenswrapper[4754]: I0218 20:17:25.141640 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-27gcv"] Feb 18 20:17:25 crc kubenswrapper[4754]: E0218 20:17:25.142593 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a1d3a8-12da-42b9-814f-1c07fc72083f" containerName="registry-server" Feb 18 20:17:25 crc kubenswrapper[4754]: I0218 20:17:25.142611 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a1d3a8-12da-42b9-814f-1c07fc72083f" containerName="registry-server" Feb 18 20:17:25 crc kubenswrapper[4754]: E0218 20:17:25.142659 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a1d3a8-12da-42b9-814f-1c07fc72083f" containerName="extract-content" Feb 18 20:17:25 crc kubenswrapper[4754]: I0218 20:17:25.142668 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a1d3a8-12da-42b9-814f-1c07fc72083f" containerName="extract-content" Feb 18 20:17:25 crc kubenswrapper[4754]: E0218 20:17:25.142686 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a1d3a8-12da-42b9-814f-1c07fc72083f" containerName="extract-utilities" Feb 18 20:17:25 crc kubenswrapper[4754]: I0218 20:17:25.142694 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a1d3a8-12da-42b9-814f-1c07fc72083f" containerName="extract-utilities" Feb 18 20:17:25 crc kubenswrapper[4754]: I0218 20:17:25.143005 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a1d3a8-12da-42b9-814f-1c07fc72083f" containerName="registry-server" Feb 18 20:17:25 crc kubenswrapper[4754]: I0218 20:17:25.145431 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-27gcv" Feb 18 20:17:25 crc kubenswrapper[4754]: I0218 20:17:25.181521 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-27gcv"] Feb 18 20:17:25 crc kubenswrapper[4754]: I0218 20:17:25.332895 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e62c956-1399-4e2d-b26f-e5d993d1834f-utilities\") pod \"redhat-operators-27gcv\" (UID: \"8e62c956-1399-4e2d-b26f-e5d993d1834f\") " pod="openshift-marketplace/redhat-operators-27gcv" Feb 18 20:17:25 crc kubenswrapper[4754]: I0218 20:17:25.333282 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7zcw\" (UniqueName: \"kubernetes.io/projected/8e62c956-1399-4e2d-b26f-e5d993d1834f-kube-api-access-p7zcw\") pod \"redhat-operators-27gcv\" (UID: \"8e62c956-1399-4e2d-b26f-e5d993d1834f\") " pod="openshift-marketplace/redhat-operators-27gcv" Feb 18 20:17:25 crc kubenswrapper[4754]: I0218 20:17:25.333420 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e62c956-1399-4e2d-b26f-e5d993d1834f-catalog-content\") pod \"redhat-operators-27gcv\" (UID: \"8e62c956-1399-4e2d-b26f-e5d993d1834f\") " pod="openshift-marketplace/redhat-operators-27gcv" Feb 18 20:17:25 crc kubenswrapper[4754]: I0218 20:17:25.435965 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e62c956-1399-4e2d-b26f-e5d993d1834f-catalog-content\") pod \"redhat-operators-27gcv\" (UID: \"8e62c956-1399-4e2d-b26f-e5d993d1834f\") " pod="openshift-marketplace/redhat-operators-27gcv" Feb 18 20:17:25 crc kubenswrapper[4754]: I0218 20:17:25.436117 4754 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e62c956-1399-4e2d-b26f-e5d993d1834f-utilities\") pod \"redhat-operators-27gcv\" (UID: \"8e62c956-1399-4e2d-b26f-e5d993d1834f\") " pod="openshift-marketplace/redhat-operators-27gcv" Feb 18 20:17:25 crc kubenswrapper[4754]: I0218 20:17:25.436200 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7zcw\" (UniqueName: \"kubernetes.io/projected/8e62c956-1399-4e2d-b26f-e5d993d1834f-kube-api-access-p7zcw\") pod \"redhat-operators-27gcv\" (UID: \"8e62c956-1399-4e2d-b26f-e5d993d1834f\") " pod="openshift-marketplace/redhat-operators-27gcv" Feb 18 20:17:25 crc kubenswrapper[4754]: I0218 20:17:25.436654 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e62c956-1399-4e2d-b26f-e5d993d1834f-utilities\") pod \"redhat-operators-27gcv\" (UID: \"8e62c956-1399-4e2d-b26f-e5d993d1834f\") " pod="openshift-marketplace/redhat-operators-27gcv" Feb 18 20:17:25 crc kubenswrapper[4754]: I0218 20:17:25.436672 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e62c956-1399-4e2d-b26f-e5d993d1834f-catalog-content\") pod \"redhat-operators-27gcv\" (UID: \"8e62c956-1399-4e2d-b26f-e5d993d1834f\") " pod="openshift-marketplace/redhat-operators-27gcv" Feb 18 20:17:25 crc kubenswrapper[4754]: I0218 20:17:25.467445 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7zcw\" (UniqueName: \"kubernetes.io/projected/8e62c956-1399-4e2d-b26f-e5d993d1834f-kube-api-access-p7zcw\") pod \"redhat-operators-27gcv\" (UID: \"8e62c956-1399-4e2d-b26f-e5d993d1834f\") " pod="openshift-marketplace/redhat-operators-27gcv" Feb 18 20:17:25 crc kubenswrapper[4754]: I0218 20:17:25.472280 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-27gcv" Feb 18 20:17:25 crc kubenswrapper[4754]: I0218 20:17:25.946050 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-27gcv"] Feb 18 20:17:26 crc kubenswrapper[4754]: I0218 20:17:26.037028 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27gcv" event={"ID":"8e62c956-1399-4e2d-b26f-e5d993d1834f","Type":"ContainerStarted","Data":"9cc6c37f73d9484c87d7d5f9ed3396ef81b4c1e0b54ec4612871c15d0951d34b"} Feb 18 20:17:27 crc kubenswrapper[4754]: I0218 20:17:27.047382 4754 generic.go:334] "Generic (PLEG): container finished" podID="8e62c956-1399-4e2d-b26f-e5d993d1834f" containerID="2943a084f9d212d510e4bb111cc19895d042d0aff11d5f55c0c25ad21a53b677" exitCode=0 Feb 18 20:17:27 crc kubenswrapper[4754]: I0218 20:17:27.047442 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27gcv" event={"ID":"8e62c956-1399-4e2d-b26f-e5d993d1834f","Type":"ContainerDied","Data":"2943a084f9d212d510e4bb111cc19895d042d0aff11d5f55c0c25ad21a53b677"} Feb 18 20:17:27 crc kubenswrapper[4754]: I0218 20:17:27.049591 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:17:28 crc kubenswrapper[4754]: I0218 20:17:28.058110 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27gcv" event={"ID":"8e62c956-1399-4e2d-b26f-e5d993d1834f","Type":"ContainerStarted","Data":"4859d0fba69db45e29f0674cd549b6599d3b18b94e29c425d5db41f138f00a79"} Feb 18 20:17:31 crc kubenswrapper[4754]: I0218 20:17:31.089117 4754 generic.go:334] "Generic (PLEG): container finished" podID="8e62c956-1399-4e2d-b26f-e5d993d1834f" containerID="4859d0fba69db45e29f0674cd549b6599d3b18b94e29c425d5db41f138f00a79" exitCode=0 Feb 18 20:17:31 crc kubenswrapper[4754]: I0218 20:17:31.089671 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-27gcv" event={"ID":"8e62c956-1399-4e2d-b26f-e5d993d1834f","Type":"ContainerDied","Data":"4859d0fba69db45e29f0674cd549b6599d3b18b94e29c425d5db41f138f00a79"} Feb 18 20:17:32 crc kubenswrapper[4754]: I0218 20:17:32.103680 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27gcv" event={"ID":"8e62c956-1399-4e2d-b26f-e5d993d1834f","Type":"ContainerStarted","Data":"838aed842ba86449028c27c373aa9eea57af5275063b203053fc08c6cbed2410"} Feb 18 20:17:32 crc kubenswrapper[4754]: I0218 20:17:32.127027 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-27gcv" podStartSLOduration=2.722297201 podStartE2EDuration="7.127008996s" podCreationTimestamp="2026-02-18 20:17:25 +0000 UTC" firstStartedPulling="2026-02-18 20:17:27.049339525 +0000 UTC m=+3549.499752321" lastFinishedPulling="2026-02-18 20:17:31.4540513 +0000 UTC m=+3553.904464116" observedRunningTime="2026-02-18 20:17:32.122439103 +0000 UTC m=+3554.572851899" watchObservedRunningTime="2026-02-18 20:17:32.127008996 +0000 UTC m=+3554.577421792" Feb 18 20:17:35 crc kubenswrapper[4754]: I0218 20:17:35.472693 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-27gcv" Feb 18 20:17:35 crc kubenswrapper[4754]: I0218 20:17:35.473030 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-27gcv" Feb 18 20:17:36 crc kubenswrapper[4754]: I0218 20:17:36.560825 4754 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-27gcv" podUID="8e62c956-1399-4e2d-b26f-e5d993d1834f" containerName="registry-server" probeResult="failure" output=< Feb 18 20:17:36 crc kubenswrapper[4754]: timeout: failed to connect service ":50051" within 1s Feb 18 20:17:36 crc kubenswrapper[4754]: > Feb 18 20:17:38 crc kubenswrapper[4754]: I0218 
20:17:38.096705 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:17:38 crc kubenswrapper[4754]: I0218 20:17:38.097177 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:17:38 crc kubenswrapper[4754]: I0218 20:17:38.097246 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 20:17:38 crc kubenswrapper[4754]: I0218 20:17:38.098483 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8"} pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:17:38 crc kubenswrapper[4754]: I0218 20:17:38.098587 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" containerID="cri-o://bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" gracePeriod=600 Feb 18 20:17:38 crc kubenswrapper[4754]: E0218 20:17:38.223958 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:17:39 crc kubenswrapper[4754]: I0218 20:17:39.174482 4754 generic.go:334] "Generic (PLEG): container finished" podID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" exitCode=0 Feb 18 20:17:39 crc kubenswrapper[4754]: I0218 20:17:39.174533 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerDied","Data":"bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8"} Feb 18 20:17:39 crc kubenswrapper[4754]: I0218 20:17:39.174965 4754 scope.go:117] "RemoveContainer" containerID="18fb9f12ddc4eddd2e154981fe0e4a5a76e41bccdbad7fd1512501a33ef4bc25" Feb 18 20:17:39 crc kubenswrapper[4754]: I0218 20:17:39.176619 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:17:39 crc kubenswrapper[4754]: E0218 20:17:39.177295 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:17:45 crc kubenswrapper[4754]: I0218 20:17:45.514668 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-27gcv" Feb 18 20:17:45 crc kubenswrapper[4754]: I0218 20:17:45.561808 4754 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-27gcv" Feb 18 20:17:45 crc kubenswrapper[4754]: I0218 20:17:45.749276 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-27gcv"] Feb 18 20:17:47 crc kubenswrapper[4754]: I0218 20:17:47.260944 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-27gcv" podUID="8e62c956-1399-4e2d-b26f-e5d993d1834f" containerName="registry-server" containerID="cri-o://838aed842ba86449028c27c373aa9eea57af5275063b203053fc08c6cbed2410" gracePeriod=2 Feb 18 20:17:47 crc kubenswrapper[4754]: I0218 20:17:47.775832 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-27gcv" Feb 18 20:17:47 crc kubenswrapper[4754]: I0218 20:17:47.886889 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e62c956-1399-4e2d-b26f-e5d993d1834f-utilities\") pod \"8e62c956-1399-4e2d-b26f-e5d993d1834f\" (UID: \"8e62c956-1399-4e2d-b26f-e5d993d1834f\") " Feb 18 20:17:47 crc kubenswrapper[4754]: I0218 20:17:47.887065 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7zcw\" (UniqueName: \"kubernetes.io/projected/8e62c956-1399-4e2d-b26f-e5d993d1834f-kube-api-access-p7zcw\") pod \"8e62c956-1399-4e2d-b26f-e5d993d1834f\" (UID: \"8e62c956-1399-4e2d-b26f-e5d993d1834f\") " Feb 18 20:17:47 crc kubenswrapper[4754]: I0218 20:17:47.887133 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e62c956-1399-4e2d-b26f-e5d993d1834f-catalog-content\") pod \"8e62c956-1399-4e2d-b26f-e5d993d1834f\" (UID: \"8e62c956-1399-4e2d-b26f-e5d993d1834f\") " Feb 18 20:17:47 crc kubenswrapper[4754]: I0218 20:17:47.887734 4754 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e62c956-1399-4e2d-b26f-e5d993d1834f-utilities" (OuterVolumeSpecName: "utilities") pod "8e62c956-1399-4e2d-b26f-e5d993d1834f" (UID: "8e62c956-1399-4e2d-b26f-e5d993d1834f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:17:47 crc kubenswrapper[4754]: I0218 20:17:47.895631 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e62c956-1399-4e2d-b26f-e5d993d1834f-kube-api-access-p7zcw" (OuterVolumeSpecName: "kube-api-access-p7zcw") pod "8e62c956-1399-4e2d-b26f-e5d993d1834f" (UID: "8e62c956-1399-4e2d-b26f-e5d993d1834f"). InnerVolumeSpecName "kube-api-access-p7zcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:17:47 crc kubenswrapper[4754]: I0218 20:17:47.989386 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7zcw\" (UniqueName: \"kubernetes.io/projected/8e62c956-1399-4e2d-b26f-e5d993d1834f-kube-api-access-p7zcw\") on node \"crc\" DevicePath \"\"" Feb 18 20:17:47 crc kubenswrapper[4754]: I0218 20:17:47.989417 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e62c956-1399-4e2d-b26f-e5d993d1834f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:17:48 crc kubenswrapper[4754]: I0218 20:17:48.028706 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e62c956-1399-4e2d-b26f-e5d993d1834f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e62c956-1399-4e2d-b26f-e5d993d1834f" (UID: "8e62c956-1399-4e2d-b26f-e5d993d1834f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:17:48 crc kubenswrapper[4754]: I0218 20:17:48.091291 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e62c956-1399-4e2d-b26f-e5d993d1834f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:17:48 crc kubenswrapper[4754]: I0218 20:17:48.274588 4754 generic.go:334] "Generic (PLEG): container finished" podID="8e62c956-1399-4e2d-b26f-e5d993d1834f" containerID="838aed842ba86449028c27c373aa9eea57af5275063b203053fc08c6cbed2410" exitCode=0 Feb 18 20:17:48 crc kubenswrapper[4754]: I0218 20:17:48.274636 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27gcv" event={"ID":"8e62c956-1399-4e2d-b26f-e5d993d1834f","Type":"ContainerDied","Data":"838aed842ba86449028c27c373aa9eea57af5275063b203053fc08c6cbed2410"} Feb 18 20:17:48 crc kubenswrapper[4754]: I0218 20:17:48.274682 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-27gcv" event={"ID":"8e62c956-1399-4e2d-b26f-e5d993d1834f","Type":"ContainerDied","Data":"9cc6c37f73d9484c87d7d5f9ed3396ef81b4c1e0b54ec4612871c15d0951d34b"} Feb 18 20:17:48 crc kubenswrapper[4754]: I0218 20:17:48.274702 4754 scope.go:117] "RemoveContainer" containerID="838aed842ba86449028c27c373aa9eea57af5275063b203053fc08c6cbed2410" Feb 18 20:17:48 crc kubenswrapper[4754]: I0218 20:17:48.274746 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-27gcv" Feb 18 20:17:48 crc kubenswrapper[4754]: I0218 20:17:48.295797 4754 scope.go:117] "RemoveContainer" containerID="4859d0fba69db45e29f0674cd549b6599d3b18b94e29c425d5db41f138f00a79" Feb 18 20:17:48 crc kubenswrapper[4754]: I0218 20:17:48.320609 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-27gcv"] Feb 18 20:17:48 crc kubenswrapper[4754]: I0218 20:17:48.338585 4754 scope.go:117] "RemoveContainer" containerID="2943a084f9d212d510e4bb111cc19895d042d0aff11d5f55c0c25ad21a53b677" Feb 18 20:17:48 crc kubenswrapper[4754]: I0218 20:17:48.339450 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-27gcv"] Feb 18 20:17:48 crc kubenswrapper[4754]: I0218 20:17:48.398854 4754 scope.go:117] "RemoveContainer" containerID="838aed842ba86449028c27c373aa9eea57af5275063b203053fc08c6cbed2410" Feb 18 20:17:48 crc kubenswrapper[4754]: E0218 20:17:48.399405 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"838aed842ba86449028c27c373aa9eea57af5275063b203053fc08c6cbed2410\": container with ID starting with 838aed842ba86449028c27c373aa9eea57af5275063b203053fc08c6cbed2410 not found: ID does not exist" containerID="838aed842ba86449028c27c373aa9eea57af5275063b203053fc08c6cbed2410" Feb 18 20:17:48 crc kubenswrapper[4754]: I0218 20:17:48.399438 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"838aed842ba86449028c27c373aa9eea57af5275063b203053fc08c6cbed2410"} err="failed to get container status \"838aed842ba86449028c27c373aa9eea57af5275063b203053fc08c6cbed2410\": rpc error: code = NotFound desc = could not find container \"838aed842ba86449028c27c373aa9eea57af5275063b203053fc08c6cbed2410\": container with ID starting with 838aed842ba86449028c27c373aa9eea57af5275063b203053fc08c6cbed2410 not found: ID does 
not exist" Feb 18 20:17:48 crc kubenswrapper[4754]: I0218 20:17:48.399461 4754 scope.go:117] "RemoveContainer" containerID="4859d0fba69db45e29f0674cd549b6599d3b18b94e29c425d5db41f138f00a79" Feb 18 20:17:48 crc kubenswrapper[4754]: E0218 20:17:48.399790 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4859d0fba69db45e29f0674cd549b6599d3b18b94e29c425d5db41f138f00a79\": container with ID starting with 4859d0fba69db45e29f0674cd549b6599d3b18b94e29c425d5db41f138f00a79 not found: ID does not exist" containerID="4859d0fba69db45e29f0674cd549b6599d3b18b94e29c425d5db41f138f00a79" Feb 18 20:17:48 crc kubenswrapper[4754]: I0218 20:17:48.399855 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4859d0fba69db45e29f0674cd549b6599d3b18b94e29c425d5db41f138f00a79"} err="failed to get container status \"4859d0fba69db45e29f0674cd549b6599d3b18b94e29c425d5db41f138f00a79\": rpc error: code = NotFound desc = could not find container \"4859d0fba69db45e29f0674cd549b6599d3b18b94e29c425d5db41f138f00a79\": container with ID starting with 4859d0fba69db45e29f0674cd549b6599d3b18b94e29c425d5db41f138f00a79 not found: ID does not exist" Feb 18 20:17:48 crc kubenswrapper[4754]: I0218 20:17:48.399888 4754 scope.go:117] "RemoveContainer" containerID="2943a084f9d212d510e4bb111cc19895d042d0aff11d5f55c0c25ad21a53b677" Feb 18 20:17:48 crc kubenswrapper[4754]: E0218 20:17:48.400492 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2943a084f9d212d510e4bb111cc19895d042d0aff11d5f55c0c25ad21a53b677\": container with ID starting with 2943a084f9d212d510e4bb111cc19895d042d0aff11d5f55c0c25ad21a53b677 not found: ID does not exist" containerID="2943a084f9d212d510e4bb111cc19895d042d0aff11d5f55c0c25ad21a53b677" Feb 18 20:17:48 crc kubenswrapper[4754]: I0218 20:17:48.400531 4754 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2943a084f9d212d510e4bb111cc19895d042d0aff11d5f55c0c25ad21a53b677"} err="failed to get container status \"2943a084f9d212d510e4bb111cc19895d042d0aff11d5f55c0c25ad21a53b677\": rpc error: code = NotFound desc = could not find container \"2943a084f9d212d510e4bb111cc19895d042d0aff11d5f55c0c25ad21a53b677\": container with ID starting with 2943a084f9d212d510e4bb111cc19895d042d0aff11d5f55c0c25ad21a53b677 not found: ID does not exist" Feb 18 20:17:50 crc kubenswrapper[4754]: I0218 20:17:50.209976 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:17:50 crc kubenswrapper[4754]: E0218 20:17:50.210673 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:17:50 crc kubenswrapper[4754]: I0218 20:17:50.224317 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e62c956-1399-4e2d-b26f-e5d993d1834f" path="/var/lib/kubelet/pods/8e62c956-1399-4e2d-b26f-e5d993d1834f/volumes" Feb 18 20:18:05 crc kubenswrapper[4754]: I0218 20:18:05.210747 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:18:05 crc kubenswrapper[4754]: E0218 20:18:05.211859 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:18:20 crc kubenswrapper[4754]: I0218 20:18:20.211407 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:18:20 crc kubenswrapper[4754]: E0218 20:18:20.213652 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:18:32 crc kubenswrapper[4754]: I0218 20:18:32.209652 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:18:32 crc kubenswrapper[4754]: E0218 20:18:32.210415 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:18:47 crc kubenswrapper[4754]: I0218 20:18:47.209522 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:18:47 crc kubenswrapper[4754]: E0218 20:18:47.210623 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:19:00 crc kubenswrapper[4754]: I0218 20:19:00.211427 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:19:00 crc kubenswrapper[4754]: E0218 20:19:00.212447 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:19:11 crc kubenswrapper[4754]: I0218 20:19:11.210350 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:19:11 crc kubenswrapper[4754]: E0218 20:19:11.211114 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:19:23 crc kubenswrapper[4754]: I0218 20:19:23.209602 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:19:23 crc kubenswrapper[4754]: E0218 20:19:23.210491 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:19:38 crc kubenswrapper[4754]: I0218 20:19:38.216735 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:19:38 crc kubenswrapper[4754]: E0218 20:19:38.218111 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:19:53 crc kubenswrapper[4754]: I0218 20:19:53.210744 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:19:53 crc kubenswrapper[4754]: E0218 20:19:53.211861 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:20:07 crc kubenswrapper[4754]: I0218 20:20:07.210351 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:20:07 crc kubenswrapper[4754]: E0218 20:20:07.211429 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:20:19 crc kubenswrapper[4754]: I0218 20:20:19.210425 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:20:19 crc kubenswrapper[4754]: E0218 20:20:19.211136 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:20:31 crc kubenswrapper[4754]: I0218 20:20:31.209254 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:20:31 crc kubenswrapper[4754]: E0218 20:20:31.210026 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:20:42 crc kubenswrapper[4754]: I0218 20:20:42.210017 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:20:42 crc kubenswrapper[4754]: E0218 20:20:42.211664 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:20:54 crc kubenswrapper[4754]: I0218 20:20:54.210056 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:20:54 crc kubenswrapper[4754]: E0218 20:20:54.221368 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:21:07 crc kubenswrapper[4754]: I0218 20:21:07.209739 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:21:07 crc kubenswrapper[4754]: E0218 20:21:07.210577 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:21:22 crc kubenswrapper[4754]: I0218 20:21:22.210754 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:21:22 crc kubenswrapper[4754]: E0218 20:21:22.231053 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:21:33 crc kubenswrapper[4754]: I0218 20:21:33.209231 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:21:33 crc kubenswrapper[4754]: E0218 20:21:33.209980 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:21:44 crc kubenswrapper[4754]: I0218 20:21:44.209761 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:21:44 crc kubenswrapper[4754]: E0218 20:21:44.210558 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:21:56 crc kubenswrapper[4754]: I0218 20:21:56.210311 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:21:56 crc kubenswrapper[4754]: E0218 20:21:56.211197 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:22:09 crc kubenswrapper[4754]: I0218 20:22:09.209723 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:22:09 crc kubenswrapper[4754]: E0218 20:22:09.210636 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:22:24 crc kubenswrapper[4754]: I0218 20:22:24.210029 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:22:24 crc kubenswrapper[4754]: E0218 20:22:24.211200 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:22:39 crc kubenswrapper[4754]: I0218 20:22:39.210325 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:22:39 crc kubenswrapper[4754]: I0218 20:22:39.908820 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"0cec986a50feba9cff892a790948bb12f7c85cde9c84fb8c0c348b46e1636064"} Feb 18 20:23:28 crc kubenswrapper[4754]: I0218 20:23:28.861085 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nmcrq"] Feb 18 20:23:28 crc kubenswrapper[4754]: E0218 20:23:28.862006 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e62c956-1399-4e2d-b26f-e5d993d1834f" containerName="registry-server" Feb 18 20:23:28 crc kubenswrapper[4754]: I0218 20:23:28.862024 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e62c956-1399-4e2d-b26f-e5d993d1834f" containerName="registry-server" Feb 18 20:23:28 crc kubenswrapper[4754]: E0218 20:23:28.862053 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e62c956-1399-4e2d-b26f-e5d993d1834f" containerName="extract-utilities" Feb 18 20:23:28 crc kubenswrapper[4754]: I0218 20:23:28.862062 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e62c956-1399-4e2d-b26f-e5d993d1834f" containerName="extract-utilities" Feb 18 20:23:28 crc kubenswrapper[4754]: E0218 20:23:28.862076 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e62c956-1399-4e2d-b26f-e5d993d1834f" containerName="extract-content" Feb 18 20:23:28 crc kubenswrapper[4754]: I0218 20:23:28.862085 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e62c956-1399-4e2d-b26f-e5d993d1834f" containerName="extract-content" Feb 18 20:23:28 crc kubenswrapper[4754]: I0218 20:23:28.862371 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e62c956-1399-4e2d-b26f-e5d993d1834f" containerName="registry-server" Feb 18 20:23:28 crc kubenswrapper[4754]: I0218 20:23:28.864103 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nmcrq" Feb 18 20:23:28 crc kubenswrapper[4754]: I0218 20:23:28.875106 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmcrq"] Feb 18 20:23:28 crc kubenswrapper[4754]: I0218 20:23:28.969837 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcd299a0-e117-426d-b190-0c81a5b7c795-catalog-content\") pod \"certified-operators-nmcrq\" (UID: \"fcd299a0-e117-426d-b190-0c81a5b7c795\") " pod="openshift-marketplace/certified-operators-nmcrq" Feb 18 20:23:28 crc kubenswrapper[4754]: I0218 20:23:28.970485 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcd299a0-e117-426d-b190-0c81a5b7c795-utilities\") pod \"certified-operators-nmcrq\" (UID: \"fcd299a0-e117-426d-b190-0c81a5b7c795\") " pod="openshift-marketplace/certified-operators-nmcrq" Feb 18 20:23:28 crc kubenswrapper[4754]: I0218 20:23:28.970652 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjsw2\" (UniqueName: \"kubernetes.io/projected/fcd299a0-e117-426d-b190-0c81a5b7c795-kube-api-access-tjsw2\") pod \"certified-operators-nmcrq\" (UID: \"fcd299a0-e117-426d-b190-0c81a5b7c795\") " pod="openshift-marketplace/certified-operators-nmcrq" Feb 18 20:23:29 crc kubenswrapper[4754]: I0218 20:23:29.072496 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjsw2\" (UniqueName: \"kubernetes.io/projected/fcd299a0-e117-426d-b190-0c81a5b7c795-kube-api-access-tjsw2\") pod \"certified-operators-nmcrq\" (UID: \"fcd299a0-e117-426d-b190-0c81a5b7c795\") " pod="openshift-marketplace/certified-operators-nmcrq" Feb 18 20:23:29 crc kubenswrapper[4754]: I0218 20:23:29.072608 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcd299a0-e117-426d-b190-0c81a5b7c795-catalog-content\") pod \"certified-operators-nmcrq\" (UID: \"fcd299a0-e117-426d-b190-0c81a5b7c795\") " pod="openshift-marketplace/certified-operators-nmcrq" Feb 18 20:23:29 crc kubenswrapper[4754]: I0218 20:23:29.072691 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcd299a0-e117-426d-b190-0c81a5b7c795-utilities\") pod \"certified-operators-nmcrq\" (UID: \"fcd299a0-e117-426d-b190-0c81a5b7c795\") " pod="openshift-marketplace/certified-operators-nmcrq" Feb 18 20:23:29 crc kubenswrapper[4754]: I0218 20:23:29.073193 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcd299a0-e117-426d-b190-0c81a5b7c795-utilities\") pod \"certified-operators-nmcrq\" (UID: \"fcd299a0-e117-426d-b190-0c81a5b7c795\") " pod="openshift-marketplace/certified-operators-nmcrq" Feb 18 20:23:29 crc kubenswrapper[4754]: I0218 20:23:29.073640 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcd299a0-e117-426d-b190-0c81a5b7c795-catalog-content\") pod \"certified-operators-nmcrq\" (UID: \"fcd299a0-e117-426d-b190-0c81a5b7c795\") " pod="openshift-marketplace/certified-operators-nmcrq" Feb 18 20:23:29 crc kubenswrapper[4754]: I0218 20:23:29.091839 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjsw2\" (UniqueName: \"kubernetes.io/projected/fcd299a0-e117-426d-b190-0c81a5b7c795-kube-api-access-tjsw2\") pod \"certified-operators-nmcrq\" (UID: \"fcd299a0-e117-426d-b190-0c81a5b7c795\") " pod="openshift-marketplace/certified-operators-nmcrq" Feb 18 20:23:29 crc kubenswrapper[4754]: I0218 20:23:29.201573 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nmcrq" Feb 18 20:23:29 crc kubenswrapper[4754]: I0218 20:23:29.767594 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmcrq"] Feb 18 20:23:30 crc kubenswrapper[4754]: I0218 20:23:30.447561 4754 generic.go:334] "Generic (PLEG): container finished" podID="fcd299a0-e117-426d-b190-0c81a5b7c795" containerID="72e05014bf8c5f078666d1539fa93e48b44763277bea83affea71abaa26cfa63" exitCode=0 Feb 18 20:23:30 crc kubenswrapper[4754]: I0218 20:23:30.447671 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmcrq" event={"ID":"fcd299a0-e117-426d-b190-0c81a5b7c795","Type":"ContainerDied","Data":"72e05014bf8c5f078666d1539fa93e48b44763277bea83affea71abaa26cfa63"} Feb 18 20:23:30 crc kubenswrapper[4754]: I0218 20:23:30.447878 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmcrq" event={"ID":"fcd299a0-e117-426d-b190-0c81a5b7c795","Type":"ContainerStarted","Data":"651958e92902e177fee849db775a251a3b140eee9f10e94e97e1be18b565003a"} Feb 18 20:23:30 crc kubenswrapper[4754]: I0218 20:23:30.449610 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:23:31 crc kubenswrapper[4754]: I0218 20:23:31.458317 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmcrq" event={"ID":"fcd299a0-e117-426d-b190-0c81a5b7c795","Type":"ContainerStarted","Data":"494139df99c3f468aa3a8392a9be0334f58e53e63770cdd0eb556a0bb3acffdc"} Feb 18 20:23:33 crc kubenswrapper[4754]: I0218 20:23:33.479666 4754 generic.go:334] "Generic (PLEG): container finished" podID="fcd299a0-e117-426d-b190-0c81a5b7c795" containerID="494139df99c3f468aa3a8392a9be0334f58e53e63770cdd0eb556a0bb3acffdc" exitCode=0 Feb 18 20:23:33 crc kubenswrapper[4754]: I0218 20:23:33.479784 4754 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-nmcrq" event={"ID":"fcd299a0-e117-426d-b190-0c81a5b7c795","Type":"ContainerDied","Data":"494139df99c3f468aa3a8392a9be0334f58e53e63770cdd0eb556a0bb3acffdc"} Feb 18 20:23:34 crc kubenswrapper[4754]: I0218 20:23:34.491695 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmcrq" event={"ID":"fcd299a0-e117-426d-b190-0c81a5b7c795","Type":"ContainerStarted","Data":"b6e11953882a65fb0c34d929becdc74b0e507e4ae2097bbe0001256a5a40f688"} Feb 18 20:23:34 crc kubenswrapper[4754]: I0218 20:23:34.519376 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nmcrq" podStartSLOduration=3.092778029 podStartE2EDuration="6.519355526s" podCreationTimestamp="2026-02-18 20:23:28 +0000 UTC" firstStartedPulling="2026-02-18 20:23:30.449392524 +0000 UTC m=+3912.899805320" lastFinishedPulling="2026-02-18 20:23:33.875970021 +0000 UTC m=+3916.326382817" observedRunningTime="2026-02-18 20:23:34.518734277 +0000 UTC m=+3916.969147073" watchObservedRunningTime="2026-02-18 20:23:34.519355526 +0000 UTC m=+3916.969768322" Feb 18 20:23:39 crc kubenswrapper[4754]: I0218 20:23:39.201851 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nmcrq" Feb 18 20:23:39 crc kubenswrapper[4754]: I0218 20:23:39.203963 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nmcrq" Feb 18 20:23:39 crc kubenswrapper[4754]: I0218 20:23:39.261661 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nmcrq" Feb 18 20:23:39 crc kubenswrapper[4754]: I0218 20:23:39.645750 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nmcrq" Feb 18 20:23:39 crc kubenswrapper[4754]: I0218 
20:23:39.717058 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmcrq"] Feb 18 20:23:41 crc kubenswrapper[4754]: I0218 20:23:41.555596 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nmcrq" podUID="fcd299a0-e117-426d-b190-0c81a5b7c795" containerName="registry-server" containerID="cri-o://b6e11953882a65fb0c34d929becdc74b0e507e4ae2097bbe0001256a5a40f688" gracePeriod=2 Feb 18 20:23:42 crc kubenswrapper[4754]: I0218 20:23:42.570673 4754 generic.go:334] "Generic (PLEG): container finished" podID="fcd299a0-e117-426d-b190-0c81a5b7c795" containerID="b6e11953882a65fb0c34d929becdc74b0e507e4ae2097bbe0001256a5a40f688" exitCode=0 Feb 18 20:23:42 crc kubenswrapper[4754]: I0218 20:23:42.570845 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmcrq" event={"ID":"fcd299a0-e117-426d-b190-0c81a5b7c795","Type":"ContainerDied","Data":"b6e11953882a65fb0c34d929becdc74b0e507e4ae2097bbe0001256a5a40f688"} Feb 18 20:23:42 crc kubenswrapper[4754]: I0218 20:23:42.764455 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nmcrq" Feb 18 20:23:42 crc kubenswrapper[4754]: I0218 20:23:42.882077 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcd299a0-e117-426d-b190-0c81a5b7c795-catalog-content\") pod \"fcd299a0-e117-426d-b190-0c81a5b7c795\" (UID: \"fcd299a0-e117-426d-b190-0c81a5b7c795\") " Feb 18 20:23:42 crc kubenswrapper[4754]: I0218 20:23:42.882331 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcd299a0-e117-426d-b190-0c81a5b7c795-utilities\") pod \"fcd299a0-e117-426d-b190-0c81a5b7c795\" (UID: \"fcd299a0-e117-426d-b190-0c81a5b7c795\") " Feb 18 20:23:42 crc kubenswrapper[4754]: I0218 20:23:42.882406 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjsw2\" (UniqueName: \"kubernetes.io/projected/fcd299a0-e117-426d-b190-0c81a5b7c795-kube-api-access-tjsw2\") pod \"fcd299a0-e117-426d-b190-0c81a5b7c795\" (UID: \"fcd299a0-e117-426d-b190-0c81a5b7c795\") " Feb 18 20:23:42 crc kubenswrapper[4754]: I0218 20:23:42.882849 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcd299a0-e117-426d-b190-0c81a5b7c795-utilities" (OuterVolumeSpecName: "utilities") pod "fcd299a0-e117-426d-b190-0c81a5b7c795" (UID: "fcd299a0-e117-426d-b190-0c81a5b7c795"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:23:42 crc kubenswrapper[4754]: I0218 20:23:42.884044 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcd299a0-e117-426d-b190-0c81a5b7c795-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:23:42 crc kubenswrapper[4754]: I0218 20:23:42.934883 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcd299a0-e117-426d-b190-0c81a5b7c795-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fcd299a0-e117-426d-b190-0c81a5b7c795" (UID: "fcd299a0-e117-426d-b190-0c81a5b7c795"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:23:42 crc kubenswrapper[4754]: I0218 20:23:42.985321 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcd299a0-e117-426d-b190-0c81a5b7c795-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:23:43 crc kubenswrapper[4754]: I0218 20:23:43.280505 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd299a0-e117-426d-b190-0c81a5b7c795-kube-api-access-tjsw2" (OuterVolumeSpecName: "kube-api-access-tjsw2") pod "fcd299a0-e117-426d-b190-0c81a5b7c795" (UID: "fcd299a0-e117-426d-b190-0c81a5b7c795"). InnerVolumeSpecName "kube-api-access-tjsw2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:23:43 crc kubenswrapper[4754]: I0218 20:23:43.290823 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjsw2\" (UniqueName: \"kubernetes.io/projected/fcd299a0-e117-426d-b190-0c81a5b7c795-kube-api-access-tjsw2\") on node \"crc\" DevicePath \"\"" Feb 18 20:23:43 crc kubenswrapper[4754]: I0218 20:23:43.582929 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmcrq" event={"ID":"fcd299a0-e117-426d-b190-0c81a5b7c795","Type":"ContainerDied","Data":"651958e92902e177fee849db775a251a3b140eee9f10e94e97e1be18b565003a"} Feb 18 20:23:43 crc kubenswrapper[4754]: I0218 20:23:43.582978 4754 scope.go:117] "RemoveContainer" containerID="b6e11953882a65fb0c34d929becdc74b0e507e4ae2097bbe0001256a5a40f688" Feb 18 20:23:43 crc kubenswrapper[4754]: I0218 20:23:43.582981 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmcrq" Feb 18 20:23:43 crc kubenswrapper[4754]: I0218 20:23:43.613374 4754 scope.go:117] "RemoveContainer" containerID="494139df99c3f468aa3a8392a9be0334f58e53e63770cdd0eb556a0bb3acffdc" Feb 18 20:23:43 crc kubenswrapper[4754]: I0218 20:23:43.624020 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmcrq"] Feb 18 20:23:43 crc kubenswrapper[4754]: I0218 20:23:43.637756 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nmcrq"] Feb 18 20:23:43 crc kubenswrapper[4754]: I0218 20:23:43.652468 4754 scope.go:117] "RemoveContainer" containerID="72e05014bf8c5f078666d1539fa93e48b44763277bea83affea71abaa26cfa63" Feb 18 20:23:44 crc kubenswrapper[4754]: I0218 20:23:44.222746 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcd299a0-e117-426d-b190-0c81a5b7c795" path="/var/lib/kubelet/pods/fcd299a0-e117-426d-b190-0c81a5b7c795/volumes" Feb 18 
20:25:08 crc kubenswrapper[4754]: I0218 20:25:08.096727 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:25:08 crc kubenswrapper[4754]: I0218 20:25:08.097280 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:25:38 crc kubenswrapper[4754]: I0218 20:25:38.097115 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:25:38 crc kubenswrapper[4754]: I0218 20:25:38.097660 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:25:48 crc kubenswrapper[4754]: I0218 20:25:48.318995 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-drpgw"] Feb 18 20:25:48 crc kubenswrapper[4754]: E0218 20:25:48.320108 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcd299a0-e117-426d-b190-0c81a5b7c795" containerName="extract-utilities" Feb 18 20:25:48 crc kubenswrapper[4754]: I0218 20:25:48.320154 4754 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fcd299a0-e117-426d-b190-0c81a5b7c795" containerName="extract-utilities" Feb 18 20:25:48 crc kubenswrapper[4754]: E0218 20:25:48.320174 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcd299a0-e117-426d-b190-0c81a5b7c795" containerName="extract-content" Feb 18 20:25:48 crc kubenswrapper[4754]: I0218 20:25:48.320183 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcd299a0-e117-426d-b190-0c81a5b7c795" containerName="extract-content" Feb 18 20:25:48 crc kubenswrapper[4754]: E0218 20:25:48.320233 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcd299a0-e117-426d-b190-0c81a5b7c795" containerName="registry-server" Feb 18 20:25:48 crc kubenswrapper[4754]: I0218 20:25:48.320241 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcd299a0-e117-426d-b190-0c81a5b7c795" containerName="registry-server" Feb 18 20:25:48 crc kubenswrapper[4754]: I0218 20:25:48.320480 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcd299a0-e117-426d-b190-0c81a5b7c795" containerName="registry-server" Feb 18 20:25:48 crc kubenswrapper[4754]: I0218 20:25:48.322259 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-drpgw" Feb 18 20:25:48 crc kubenswrapper[4754]: I0218 20:25:48.345763 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-drpgw"] Feb 18 20:25:48 crc kubenswrapper[4754]: I0218 20:25:48.403659 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a8c5b1-fbfe-4990-a5b0-8038580b3057-utilities\") pod \"community-operators-drpgw\" (UID: \"57a8c5b1-fbfe-4990-a5b0-8038580b3057\") " pod="openshift-marketplace/community-operators-drpgw" Feb 18 20:25:48 crc kubenswrapper[4754]: I0218 20:25:48.404216 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75wnb\" (UniqueName: \"kubernetes.io/projected/57a8c5b1-fbfe-4990-a5b0-8038580b3057-kube-api-access-75wnb\") pod \"community-operators-drpgw\" (UID: \"57a8c5b1-fbfe-4990-a5b0-8038580b3057\") " pod="openshift-marketplace/community-operators-drpgw" Feb 18 20:25:48 crc kubenswrapper[4754]: I0218 20:25:48.404354 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a8c5b1-fbfe-4990-a5b0-8038580b3057-catalog-content\") pod \"community-operators-drpgw\" (UID: \"57a8c5b1-fbfe-4990-a5b0-8038580b3057\") " pod="openshift-marketplace/community-operators-drpgw" Feb 18 20:25:48 crc kubenswrapper[4754]: I0218 20:25:48.505815 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75wnb\" (UniqueName: \"kubernetes.io/projected/57a8c5b1-fbfe-4990-a5b0-8038580b3057-kube-api-access-75wnb\") pod \"community-operators-drpgw\" (UID: \"57a8c5b1-fbfe-4990-a5b0-8038580b3057\") " pod="openshift-marketplace/community-operators-drpgw" Feb 18 20:25:48 crc kubenswrapper[4754]: I0218 20:25:48.505915 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a8c5b1-fbfe-4990-a5b0-8038580b3057-catalog-content\") pod \"community-operators-drpgw\" (UID: \"57a8c5b1-fbfe-4990-a5b0-8038580b3057\") " pod="openshift-marketplace/community-operators-drpgw" Feb 18 20:25:48 crc kubenswrapper[4754]: I0218 20:25:48.505989 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a8c5b1-fbfe-4990-a5b0-8038580b3057-utilities\") pod \"community-operators-drpgw\" (UID: \"57a8c5b1-fbfe-4990-a5b0-8038580b3057\") " pod="openshift-marketplace/community-operators-drpgw" Feb 18 20:25:48 crc kubenswrapper[4754]: I0218 20:25:48.506536 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a8c5b1-fbfe-4990-a5b0-8038580b3057-catalog-content\") pod \"community-operators-drpgw\" (UID: \"57a8c5b1-fbfe-4990-a5b0-8038580b3057\") " pod="openshift-marketplace/community-operators-drpgw" Feb 18 20:25:48 crc kubenswrapper[4754]: I0218 20:25:48.506580 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a8c5b1-fbfe-4990-a5b0-8038580b3057-utilities\") pod \"community-operators-drpgw\" (UID: \"57a8c5b1-fbfe-4990-a5b0-8038580b3057\") " pod="openshift-marketplace/community-operators-drpgw" Feb 18 20:25:48 crc kubenswrapper[4754]: I0218 20:25:48.538415 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75wnb\" (UniqueName: \"kubernetes.io/projected/57a8c5b1-fbfe-4990-a5b0-8038580b3057-kube-api-access-75wnb\") pod \"community-operators-drpgw\" (UID: \"57a8c5b1-fbfe-4990-a5b0-8038580b3057\") " pod="openshift-marketplace/community-operators-drpgw" Feb 18 20:25:48 crc kubenswrapper[4754]: I0218 20:25:48.653560 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-drpgw" Feb 18 20:25:49 crc kubenswrapper[4754]: I0218 20:25:49.237498 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-drpgw"] Feb 18 20:25:49 crc kubenswrapper[4754]: I0218 20:25:49.780745 4754 generic.go:334] "Generic (PLEG): container finished" podID="57a8c5b1-fbfe-4990-a5b0-8038580b3057" containerID="bd8b1639fd61932e71a4e73019e434304b501d8c310c1f8869758e722d91e8f3" exitCode=0 Feb 18 20:25:49 crc kubenswrapper[4754]: I0218 20:25:49.780792 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drpgw" event={"ID":"57a8c5b1-fbfe-4990-a5b0-8038580b3057","Type":"ContainerDied","Data":"bd8b1639fd61932e71a4e73019e434304b501d8c310c1f8869758e722d91e8f3"} Feb 18 20:25:49 crc kubenswrapper[4754]: I0218 20:25:49.781246 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drpgw" event={"ID":"57a8c5b1-fbfe-4990-a5b0-8038580b3057","Type":"ContainerStarted","Data":"ba57c54f8e103e4e516e5696c117f298053a805e69e5ebc00e098a5b3e0ed817"} Feb 18 20:25:50 crc kubenswrapper[4754]: I0218 20:25:50.794537 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drpgw" event={"ID":"57a8c5b1-fbfe-4990-a5b0-8038580b3057","Type":"ContainerStarted","Data":"272984689a0e9f66d17d81a43cb308ba6ba06d796bb48ff570501e2c8895df39"} Feb 18 20:25:52 crc kubenswrapper[4754]: I0218 20:25:52.824577 4754 generic.go:334] "Generic (PLEG): container finished" podID="57a8c5b1-fbfe-4990-a5b0-8038580b3057" containerID="272984689a0e9f66d17d81a43cb308ba6ba06d796bb48ff570501e2c8895df39" exitCode=0 Feb 18 20:25:52 crc kubenswrapper[4754]: I0218 20:25:52.825261 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drpgw" 
event={"ID":"57a8c5b1-fbfe-4990-a5b0-8038580b3057","Type":"ContainerDied","Data":"272984689a0e9f66d17d81a43cb308ba6ba06d796bb48ff570501e2c8895df39"} Feb 18 20:25:53 crc kubenswrapper[4754]: I0218 20:25:53.835461 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drpgw" event={"ID":"57a8c5b1-fbfe-4990-a5b0-8038580b3057","Type":"ContainerStarted","Data":"5e56a41d1852032f79377aaf035b2e3cde43e5cd4b30ee066328a07d3dc009f1"} Feb 18 20:25:53 crc kubenswrapper[4754]: I0218 20:25:53.862177 4754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-drpgw" podStartSLOduration=2.367922976 podStartE2EDuration="5.862153814s" podCreationTimestamp="2026-02-18 20:25:48 +0000 UTC" firstStartedPulling="2026-02-18 20:25:49.785790383 +0000 UTC m=+4052.236203190" lastFinishedPulling="2026-02-18 20:25:53.280021232 +0000 UTC m=+4055.730434028" observedRunningTime="2026-02-18 20:25:53.854238949 +0000 UTC m=+4056.304651765" watchObservedRunningTime="2026-02-18 20:25:53.862153814 +0000 UTC m=+4056.312566630" Feb 18 20:25:58 crc kubenswrapper[4754]: I0218 20:25:58.654603 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-drpgw" Feb 18 20:25:58 crc kubenswrapper[4754]: I0218 20:25:58.655519 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-drpgw" Feb 18 20:25:58 crc kubenswrapper[4754]: I0218 20:25:58.711237 4754 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-drpgw" Feb 18 20:25:58 crc kubenswrapper[4754]: I0218 20:25:58.929888 4754 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-drpgw" Feb 18 20:25:58 crc kubenswrapper[4754]: I0218 20:25:58.986648 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-drpgw"] Feb 18 20:26:00 crc kubenswrapper[4754]: I0218 20:26:00.905879 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-drpgw" podUID="57a8c5b1-fbfe-4990-a5b0-8038580b3057" containerName="registry-server" containerID="cri-o://5e56a41d1852032f79377aaf035b2e3cde43e5cd4b30ee066328a07d3dc009f1" gracePeriod=2 Feb 18 20:26:01 crc kubenswrapper[4754]: I0218 20:26:01.435356 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-drpgw" Feb 18 20:26:01 crc kubenswrapper[4754]: I0218 20:26:01.574774 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75wnb\" (UniqueName: \"kubernetes.io/projected/57a8c5b1-fbfe-4990-a5b0-8038580b3057-kube-api-access-75wnb\") pod \"57a8c5b1-fbfe-4990-a5b0-8038580b3057\" (UID: \"57a8c5b1-fbfe-4990-a5b0-8038580b3057\") " Feb 18 20:26:01 crc kubenswrapper[4754]: I0218 20:26:01.575219 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a8c5b1-fbfe-4990-a5b0-8038580b3057-catalog-content\") pod \"57a8c5b1-fbfe-4990-a5b0-8038580b3057\" (UID: \"57a8c5b1-fbfe-4990-a5b0-8038580b3057\") " Feb 18 20:26:01 crc kubenswrapper[4754]: I0218 20:26:01.575315 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a8c5b1-fbfe-4990-a5b0-8038580b3057-utilities\") pod \"57a8c5b1-fbfe-4990-a5b0-8038580b3057\" (UID: \"57a8c5b1-fbfe-4990-a5b0-8038580b3057\") " Feb 18 20:26:01 crc kubenswrapper[4754]: I0218 20:26:01.577312 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a8c5b1-fbfe-4990-a5b0-8038580b3057-utilities" (OuterVolumeSpecName: "utilities") pod "57a8c5b1-fbfe-4990-a5b0-8038580b3057" (UID: 
"57a8c5b1-fbfe-4990-a5b0-8038580b3057"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:26:01 crc kubenswrapper[4754]: I0218 20:26:01.578232 4754 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a8c5b1-fbfe-4990-a5b0-8038580b3057-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:26:01 crc kubenswrapper[4754]: I0218 20:26:01.591444 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a8c5b1-fbfe-4990-a5b0-8038580b3057-kube-api-access-75wnb" (OuterVolumeSpecName: "kube-api-access-75wnb") pod "57a8c5b1-fbfe-4990-a5b0-8038580b3057" (UID: "57a8c5b1-fbfe-4990-a5b0-8038580b3057"). InnerVolumeSpecName "kube-api-access-75wnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:26:01 crc kubenswrapper[4754]: I0218 20:26:01.635613 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a8c5b1-fbfe-4990-a5b0-8038580b3057-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a8c5b1-fbfe-4990-a5b0-8038580b3057" (UID: "57a8c5b1-fbfe-4990-a5b0-8038580b3057"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:26:01 crc kubenswrapper[4754]: I0218 20:26:01.680448 4754 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a8c5b1-fbfe-4990-a5b0-8038580b3057-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:26:01 crc kubenswrapper[4754]: I0218 20:26:01.680497 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75wnb\" (UniqueName: \"kubernetes.io/projected/57a8c5b1-fbfe-4990-a5b0-8038580b3057-kube-api-access-75wnb\") on node \"crc\" DevicePath \"\"" Feb 18 20:26:01 crc kubenswrapper[4754]: I0218 20:26:01.920203 4754 generic.go:334] "Generic (PLEG): container finished" podID="57a8c5b1-fbfe-4990-a5b0-8038580b3057" containerID="5e56a41d1852032f79377aaf035b2e3cde43e5cd4b30ee066328a07d3dc009f1" exitCode=0 Feb 18 20:26:01 crc kubenswrapper[4754]: I0218 20:26:01.920280 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drpgw" event={"ID":"57a8c5b1-fbfe-4990-a5b0-8038580b3057","Type":"ContainerDied","Data":"5e56a41d1852032f79377aaf035b2e3cde43e5cd4b30ee066328a07d3dc009f1"} Feb 18 20:26:01 crc kubenswrapper[4754]: I0218 20:26:01.920325 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-drpgw" event={"ID":"57a8c5b1-fbfe-4990-a5b0-8038580b3057","Type":"ContainerDied","Data":"ba57c54f8e103e4e516e5696c117f298053a805e69e5ebc00e098a5b3e0ed817"} Feb 18 20:26:01 crc kubenswrapper[4754]: I0218 20:26:01.920352 4754 scope.go:117] "RemoveContainer" containerID="5e56a41d1852032f79377aaf035b2e3cde43e5cd4b30ee066328a07d3dc009f1" Feb 18 20:26:01 crc kubenswrapper[4754]: I0218 20:26:01.920576 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-drpgw" Feb 18 20:26:01 crc kubenswrapper[4754]: I0218 20:26:01.943200 4754 scope.go:117] "RemoveContainer" containerID="272984689a0e9f66d17d81a43cb308ba6ba06d796bb48ff570501e2c8895df39" Feb 18 20:26:01 crc kubenswrapper[4754]: I0218 20:26:01.960094 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-drpgw"] Feb 18 20:26:01 crc kubenswrapper[4754]: I0218 20:26:01.970208 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-drpgw"] Feb 18 20:26:01 crc kubenswrapper[4754]: I0218 20:26:01.994463 4754 scope.go:117] "RemoveContainer" containerID="bd8b1639fd61932e71a4e73019e434304b501d8c310c1f8869758e722d91e8f3" Feb 18 20:26:02 crc kubenswrapper[4754]: I0218 20:26:02.052881 4754 scope.go:117] "RemoveContainer" containerID="5e56a41d1852032f79377aaf035b2e3cde43e5cd4b30ee066328a07d3dc009f1" Feb 18 20:26:02 crc kubenswrapper[4754]: E0218 20:26:02.053579 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e56a41d1852032f79377aaf035b2e3cde43e5cd4b30ee066328a07d3dc009f1\": container with ID starting with 5e56a41d1852032f79377aaf035b2e3cde43e5cd4b30ee066328a07d3dc009f1 not found: ID does not exist" containerID="5e56a41d1852032f79377aaf035b2e3cde43e5cd4b30ee066328a07d3dc009f1" Feb 18 20:26:02 crc kubenswrapper[4754]: I0218 20:26:02.053628 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e56a41d1852032f79377aaf035b2e3cde43e5cd4b30ee066328a07d3dc009f1"} err="failed to get container status \"5e56a41d1852032f79377aaf035b2e3cde43e5cd4b30ee066328a07d3dc009f1\": rpc error: code = NotFound desc = could not find container \"5e56a41d1852032f79377aaf035b2e3cde43e5cd4b30ee066328a07d3dc009f1\": container with ID starting with 5e56a41d1852032f79377aaf035b2e3cde43e5cd4b30ee066328a07d3dc009f1 not 
found: ID does not exist" Feb 18 20:26:02 crc kubenswrapper[4754]: I0218 20:26:02.053661 4754 scope.go:117] "RemoveContainer" containerID="272984689a0e9f66d17d81a43cb308ba6ba06d796bb48ff570501e2c8895df39" Feb 18 20:26:02 crc kubenswrapper[4754]: E0218 20:26:02.054078 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"272984689a0e9f66d17d81a43cb308ba6ba06d796bb48ff570501e2c8895df39\": container with ID starting with 272984689a0e9f66d17d81a43cb308ba6ba06d796bb48ff570501e2c8895df39 not found: ID does not exist" containerID="272984689a0e9f66d17d81a43cb308ba6ba06d796bb48ff570501e2c8895df39" Feb 18 20:26:02 crc kubenswrapper[4754]: I0218 20:26:02.054361 4754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"272984689a0e9f66d17d81a43cb308ba6ba06d796bb48ff570501e2c8895df39"} err="failed to get container status \"272984689a0e9f66d17d81a43cb308ba6ba06d796bb48ff570501e2c8895df39\": rpc error: code = NotFound desc = could not find container \"272984689a0e9f66d17d81a43cb308ba6ba06d796bb48ff570501e2c8895df39\": container with ID starting with 272984689a0e9f66d17d81a43cb308ba6ba06d796bb48ff570501e2c8895df39 not found: ID does not exist" Feb 18 20:26:02 crc kubenswrapper[4754]: I0218 20:26:02.054578 4754 scope.go:117] "RemoveContainer" containerID="bd8b1639fd61932e71a4e73019e434304b501d8c310c1f8869758e722d91e8f3" Feb 18 20:26:02 crc kubenswrapper[4754]: E0218 20:26:02.055540 4754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd8b1639fd61932e71a4e73019e434304b501d8c310c1f8869758e722d91e8f3\": container with ID starting with bd8b1639fd61932e71a4e73019e434304b501d8c310c1f8869758e722d91e8f3 not found: ID does not exist" containerID="bd8b1639fd61932e71a4e73019e434304b501d8c310c1f8869758e722d91e8f3" Feb 18 20:26:02 crc kubenswrapper[4754]: I0218 20:26:02.055603 4754 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd8b1639fd61932e71a4e73019e434304b501d8c310c1f8869758e722d91e8f3"} err="failed to get container status \"bd8b1639fd61932e71a4e73019e434304b501d8c310c1f8869758e722d91e8f3\": rpc error: code = NotFound desc = could not find container \"bd8b1639fd61932e71a4e73019e434304b501d8c310c1f8869758e722d91e8f3\": container with ID starting with bd8b1639fd61932e71a4e73019e434304b501d8c310c1f8869758e722d91e8f3 not found: ID does not exist" Feb 18 20:26:02 crc kubenswrapper[4754]: I0218 20:26:02.230752 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a8c5b1-fbfe-4990-a5b0-8038580b3057" path="/var/lib/kubelet/pods/57a8c5b1-fbfe-4990-a5b0-8038580b3057/volumes" Feb 18 20:26:08 crc kubenswrapper[4754]: I0218 20:26:08.097274 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:26:08 crc kubenswrapper[4754]: I0218 20:26:08.098373 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:26:08 crc kubenswrapper[4754]: I0218 20:26:08.098451 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 20:26:08 crc kubenswrapper[4754]: I0218 20:26:08.099643 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"0cec986a50feba9cff892a790948bb12f7c85cde9c84fb8c0c348b46e1636064"} pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:26:08 crc kubenswrapper[4754]: I0218 20:26:08.099717 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" containerID="cri-o://0cec986a50feba9cff892a790948bb12f7c85cde9c84fb8c0c348b46e1636064" gracePeriod=600 Feb 18 20:26:09 crc kubenswrapper[4754]: I0218 20:26:09.016308 4754 generic.go:334] "Generic (PLEG): container finished" podID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerID="0cec986a50feba9cff892a790948bb12f7c85cde9c84fb8c0c348b46e1636064" exitCode=0 Feb 18 20:26:09 crc kubenswrapper[4754]: I0218 20:26:09.016494 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerDied","Data":"0cec986a50feba9cff892a790948bb12f7c85cde9c84fb8c0c348b46e1636064"} Feb 18 20:26:09 crc kubenswrapper[4754]: I0218 20:26:09.016793 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e"} Feb 18 20:26:09 crc kubenswrapper[4754]: I0218 20:26:09.016844 4754 scope.go:117] "RemoveContainer" containerID="bdd198bb072db52ca4e59a64b05abd5b0f1549681e7fb5fc42c496466a0a8ff8" Feb 18 20:27:51 crc kubenswrapper[4754]: I0218 20:27:51.651259 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sx4rr"] Feb 18 20:27:51 crc kubenswrapper[4754]: E0218 20:27:51.652374 
4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a8c5b1-fbfe-4990-a5b0-8038580b3057" containerName="extract-utilities" Feb 18 20:27:51 crc kubenswrapper[4754]: I0218 20:27:51.652393 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a8c5b1-fbfe-4990-a5b0-8038580b3057" containerName="extract-utilities" Feb 18 20:27:51 crc kubenswrapper[4754]: E0218 20:27:51.652421 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a8c5b1-fbfe-4990-a5b0-8038580b3057" containerName="extract-content" Feb 18 20:27:51 crc kubenswrapper[4754]: I0218 20:27:51.652428 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a8c5b1-fbfe-4990-a5b0-8038580b3057" containerName="extract-content" Feb 18 20:27:51 crc kubenswrapper[4754]: E0218 20:27:51.652445 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a8c5b1-fbfe-4990-a5b0-8038580b3057" containerName="registry-server" Feb 18 20:27:51 crc kubenswrapper[4754]: I0218 20:27:51.652454 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a8c5b1-fbfe-4990-a5b0-8038580b3057" containerName="registry-server" Feb 18 20:27:51 crc kubenswrapper[4754]: I0218 20:27:51.652694 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a8c5b1-fbfe-4990-a5b0-8038580b3057" containerName="registry-server" Feb 18 20:27:51 crc kubenswrapper[4754]: I0218 20:27:51.654774 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sx4rr" Feb 18 20:27:51 crc kubenswrapper[4754]: I0218 20:27:51.671176 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sx4rr"] Feb 18 20:27:51 crc kubenswrapper[4754]: I0218 20:27:51.682076 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcbdj\" (UniqueName: \"kubernetes.io/projected/63eff0ce-71c0-481b-8a6f-14a6f07f3aa9-kube-api-access-jcbdj\") pod \"redhat-operators-sx4rr\" (UID: \"63eff0ce-71c0-481b-8a6f-14a6f07f3aa9\") " pod="openshift-marketplace/redhat-operators-sx4rr" Feb 18 20:27:51 crc kubenswrapper[4754]: I0218 20:27:51.682156 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63eff0ce-71c0-481b-8a6f-14a6f07f3aa9-utilities\") pod \"redhat-operators-sx4rr\" (UID: \"63eff0ce-71c0-481b-8a6f-14a6f07f3aa9\") " pod="openshift-marketplace/redhat-operators-sx4rr" Feb 18 20:27:51 crc kubenswrapper[4754]: I0218 20:27:51.682302 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63eff0ce-71c0-481b-8a6f-14a6f07f3aa9-catalog-content\") pod \"redhat-operators-sx4rr\" (UID: \"63eff0ce-71c0-481b-8a6f-14a6f07f3aa9\") " pod="openshift-marketplace/redhat-operators-sx4rr" Feb 18 20:27:51 crc kubenswrapper[4754]: I0218 20:27:51.784030 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcbdj\" (UniqueName: \"kubernetes.io/projected/63eff0ce-71c0-481b-8a6f-14a6f07f3aa9-kube-api-access-jcbdj\") pod \"redhat-operators-sx4rr\" (UID: \"63eff0ce-71c0-481b-8a6f-14a6f07f3aa9\") " pod="openshift-marketplace/redhat-operators-sx4rr" Feb 18 20:27:51 crc kubenswrapper[4754]: I0218 20:27:51.784080 4754 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63eff0ce-71c0-481b-8a6f-14a6f07f3aa9-utilities\") pod \"redhat-operators-sx4rr\" (UID: \"63eff0ce-71c0-481b-8a6f-14a6f07f3aa9\") " pod="openshift-marketplace/redhat-operators-sx4rr" Feb 18 20:27:51 crc kubenswrapper[4754]: I0218 20:27:51.784160 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63eff0ce-71c0-481b-8a6f-14a6f07f3aa9-catalog-content\") pod \"redhat-operators-sx4rr\" (UID: \"63eff0ce-71c0-481b-8a6f-14a6f07f3aa9\") " pod="openshift-marketplace/redhat-operators-sx4rr" Feb 18 20:27:51 crc kubenswrapper[4754]: I0218 20:27:51.784584 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63eff0ce-71c0-481b-8a6f-14a6f07f3aa9-catalog-content\") pod \"redhat-operators-sx4rr\" (UID: \"63eff0ce-71c0-481b-8a6f-14a6f07f3aa9\") " pod="openshift-marketplace/redhat-operators-sx4rr" Feb 18 20:27:51 crc kubenswrapper[4754]: I0218 20:27:51.785044 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63eff0ce-71c0-481b-8a6f-14a6f07f3aa9-utilities\") pod \"redhat-operators-sx4rr\" (UID: \"63eff0ce-71c0-481b-8a6f-14a6f07f3aa9\") " pod="openshift-marketplace/redhat-operators-sx4rr" Feb 18 20:27:51 crc kubenswrapper[4754]: I0218 20:27:51.812923 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcbdj\" (UniqueName: \"kubernetes.io/projected/63eff0ce-71c0-481b-8a6f-14a6f07f3aa9-kube-api-access-jcbdj\") pod \"redhat-operators-sx4rr\" (UID: \"63eff0ce-71c0-481b-8a6f-14a6f07f3aa9\") " pod="openshift-marketplace/redhat-operators-sx4rr" Feb 18 20:27:51 crc kubenswrapper[4754]: I0218 20:27:51.989195 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sx4rr" Feb 18 20:27:52 crc kubenswrapper[4754]: I0218 20:27:52.501069 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sx4rr"] Feb 18 20:27:52 crc kubenswrapper[4754]: E0218 20:27:52.914641 4754 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63eff0ce_71c0_481b_8a6f_14a6f07f3aa9.slice/crio-887a7d120c6c87090768382d3f59abb7fa7a76d4414f724daf15c6d155ab2585.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63eff0ce_71c0_481b_8a6f_14a6f07f3aa9.slice/crio-conmon-887a7d120c6c87090768382d3f59abb7fa7a76d4414f724daf15c6d155ab2585.scope\": RecentStats: unable to find data in memory cache]" Feb 18 20:27:53 crc kubenswrapper[4754]: I0218 20:27:53.014491 4754 generic.go:334] "Generic (PLEG): container finished" podID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" containerID="887a7d120c6c87090768382d3f59abb7fa7a76d4414f724daf15c6d155ab2585" exitCode=0 Feb 18 20:27:53 crc kubenswrapper[4754]: I0218 20:27:53.014532 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx4rr" event={"ID":"63eff0ce-71c0-481b-8a6f-14a6f07f3aa9","Type":"ContainerDied","Data":"887a7d120c6c87090768382d3f59abb7fa7a76d4414f724daf15c6d155ab2585"} Feb 18 20:27:53 crc kubenswrapper[4754]: I0218 20:27:53.014557 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx4rr" event={"ID":"63eff0ce-71c0-481b-8a6f-14a6f07f3aa9","Type":"ContainerStarted","Data":"b7b7304525d239f5308cb0cf8729877764e125bc87e00d4723ced8721917655f"} Feb 18 20:27:54 crc kubenswrapper[4754]: I0218 20:27:54.024106 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx4rr" 
event={"ID":"63eff0ce-71c0-481b-8a6f-14a6f07f3aa9","Type":"ContainerStarted","Data":"40d981488600e08b39495ac73c250e9ee4eae28d851cff22e4393becad4ec2cd"} Feb 18 20:27:55 crc kubenswrapper[4754]: I0218 20:27:55.042109 4754 generic.go:334] "Generic (PLEG): container finished" podID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" containerID="40d981488600e08b39495ac73c250e9ee4eae28d851cff22e4393becad4ec2cd" exitCode=0 Feb 18 20:27:55 crc kubenswrapper[4754]: I0218 20:27:55.042209 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx4rr" event={"ID":"63eff0ce-71c0-481b-8a6f-14a6f07f3aa9","Type":"ContainerDied","Data":"40d981488600e08b39495ac73c250e9ee4eae28d851cff22e4393becad4ec2cd"} Feb 18 20:27:55 crc kubenswrapper[4754]: E0218 20:27:55.391213 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:27:55 crc kubenswrapper[4754]: E0218 20:27:55.391790 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:30MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{31457280 0} {} 30Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcbdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sx4rr_openshift-marketplace(63eff0ce-71c0-481b-8a6f-14a6f07f3aa9): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 
Internal Server Error" logger="UnhandledError" Feb 18 20:27:55 crc kubenswrapper[4754]: E0218 20:27:55.393201 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:27:56 crc kubenswrapper[4754]: E0218 20:27:56.060011 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:28:08 crc kubenswrapper[4754]: I0218 20:28:08.096634 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:28:08 crc kubenswrapper[4754]: I0218 20:28:08.097347 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:28:10 crc kubenswrapper[4754]: E0218 20:28:10.286291 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" 
image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:28:10 crc kubenswrapper[4754]: E0218 20:28:10.286768 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:30MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{31457280 0} {} 30Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcbdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sx4rr_openshift-marketplace(63eff0ce-71c0-481b-8a6f-14a6f07f3aa9): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:28:10 crc kubenswrapper[4754]: E0218 20:28:10.287990 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:28:23 crc kubenswrapper[4754]: E0218 20:28:23.213549 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:28:38 crc kubenswrapper[4754]: I0218 20:28:38.097653 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:28:38 crc kubenswrapper[4754]: I0218 20:28:38.098290 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:28:38 crc kubenswrapper[4754]: I0218 20:28:38.227102 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:28:38 crc kubenswrapper[4754]: E0218 20:28:38.627365 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:28:38 crc kubenswrapper[4754]: E0218 20:28:38.627668 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog 
--cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:30MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{31457280 0} {} 30Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcbdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sx4rr_openshift-marketplace(63eff0ce-71c0-481b-8a6f-14a6f07f3aa9): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:28:38 crc kubenswrapper[4754]: E0218 20:28:38.628951 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:28:52 crc kubenswrapper[4754]: E0218 20:28:52.212851 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:29:07 crc kubenswrapper[4754]: E0218 20:29:07.211841 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:29:08 crc kubenswrapper[4754]: I0218 20:29:08.096597 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:29:08 crc kubenswrapper[4754]: I0218 20:29:08.096989 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:29:08 crc kubenswrapper[4754]: I0218 20:29:08.097062 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 20:29:08 crc kubenswrapper[4754]: I0218 20:29:08.098396 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e"} pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:29:08 crc kubenswrapper[4754]: I0218 20:29:08.098515 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" containerID="cri-o://3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" gracePeriod=600 Feb 18 20:29:08 crc kubenswrapper[4754]: E0218 20:29:08.430653 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:29:08 crc kubenswrapper[4754]: I0218 20:29:08.873073 4754 generic.go:334] "Generic (PLEG): container finished" podID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" exitCode=0 Feb 18 20:29:08 crc kubenswrapper[4754]: I0218 20:29:08.873132 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerDied","Data":"3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e"} Feb 18 20:29:08 crc kubenswrapper[4754]: I0218 20:29:08.873194 4754 scope.go:117] "RemoveContainer" containerID="0cec986a50feba9cff892a790948bb12f7c85cde9c84fb8c0c348b46e1636064" Feb 18 20:29:08 crc kubenswrapper[4754]: I0218 20:29:08.874353 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:29:08 crc kubenswrapper[4754]: E0218 20:29:08.874859 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:29:21 crc kubenswrapper[4754]: E0218 20:29:21.798401 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:29:21 crc kubenswrapper[4754]: E0218 
20:29:21.799293 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:30MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{31457280 0} {} 30Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcbdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sx4rr_openshift-marketplace(63eff0ce-71c0-481b-8a6f-14a6f07f3aa9): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:29:21 crc kubenswrapper[4754]: E0218 20:29:21.800985 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:29:22 crc kubenswrapper[4754]: I0218 20:29:22.209662 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:29:22 crc kubenswrapper[4754]: E0218 20:29:22.210012 4754 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:29:36 crc kubenswrapper[4754]: I0218 20:29:36.209984 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:29:36 crc kubenswrapper[4754]: E0218 20:29:36.210960 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:29:37 crc kubenswrapper[4754]: E0218 20:29:37.214288 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:29:47 crc kubenswrapper[4754]: I0218 20:29:47.210342 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:29:47 crc kubenswrapper[4754]: E0218 20:29:47.211178 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:29:52 crc kubenswrapper[4754]: E0218 20:29:52.212517 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:30:00 crc kubenswrapper[4754]: I0218 20:30:00.179631 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524110-t9884"] Feb 18 20:30:00 crc kubenswrapper[4754]: I0218 20:30:00.181777 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-t9884" Feb 18 20:30:00 crc kubenswrapper[4754]: I0218 20:30:00.185214 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 20:30:00 crc kubenswrapper[4754]: I0218 20:30:00.185343 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 20:30:00 crc kubenswrapper[4754]: I0218 20:30:00.192675 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524110-t9884"] Feb 18 20:30:00 crc kubenswrapper[4754]: I0218 20:30:00.210423 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:30:00 crc kubenswrapper[4754]: E0218 20:30:00.210746 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:30:00 crc kubenswrapper[4754]: I0218 20:30:00.354921 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04df1a11-c308-49ba-a34a-74bf9346a01e-config-volume\") pod \"collect-profiles-29524110-t9884\" (UID: \"04df1a11-c308-49ba-a34a-74bf9346a01e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-t9884" Feb 18 20:30:00 crc kubenswrapper[4754]: I0218 20:30:00.355038 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04df1a11-c308-49ba-a34a-74bf9346a01e-secret-volume\") pod \"collect-profiles-29524110-t9884\" (UID: \"04df1a11-c308-49ba-a34a-74bf9346a01e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-t9884" Feb 18 20:30:00 crc kubenswrapper[4754]: I0218 20:30:00.355097 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv8sk\" (UniqueName: \"kubernetes.io/projected/04df1a11-c308-49ba-a34a-74bf9346a01e-kube-api-access-bv8sk\") pod \"collect-profiles-29524110-t9884\" (UID: \"04df1a11-c308-49ba-a34a-74bf9346a01e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-t9884" Feb 18 20:30:00 crc kubenswrapper[4754]: I0218 20:30:00.457791 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04df1a11-c308-49ba-a34a-74bf9346a01e-config-volume\") pod \"collect-profiles-29524110-t9884\" (UID: 
\"04df1a11-c308-49ba-a34a-74bf9346a01e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-t9884" Feb 18 20:30:00 crc kubenswrapper[4754]: I0218 20:30:00.457856 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04df1a11-c308-49ba-a34a-74bf9346a01e-secret-volume\") pod \"collect-profiles-29524110-t9884\" (UID: \"04df1a11-c308-49ba-a34a-74bf9346a01e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-t9884" Feb 18 20:30:00 crc kubenswrapper[4754]: I0218 20:30:00.457905 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv8sk\" (UniqueName: \"kubernetes.io/projected/04df1a11-c308-49ba-a34a-74bf9346a01e-kube-api-access-bv8sk\") pod \"collect-profiles-29524110-t9884\" (UID: \"04df1a11-c308-49ba-a34a-74bf9346a01e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-t9884" Feb 18 20:30:00 crc kubenswrapper[4754]: I0218 20:30:00.458726 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04df1a11-c308-49ba-a34a-74bf9346a01e-config-volume\") pod \"collect-profiles-29524110-t9884\" (UID: \"04df1a11-c308-49ba-a34a-74bf9346a01e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-t9884" Feb 18 20:30:00 crc kubenswrapper[4754]: I0218 20:30:00.463807 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04df1a11-c308-49ba-a34a-74bf9346a01e-secret-volume\") pod \"collect-profiles-29524110-t9884\" (UID: \"04df1a11-c308-49ba-a34a-74bf9346a01e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-t9884" Feb 18 20:30:00 crc kubenswrapper[4754]: I0218 20:30:00.474447 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv8sk\" (UniqueName: 
\"kubernetes.io/projected/04df1a11-c308-49ba-a34a-74bf9346a01e-kube-api-access-bv8sk\") pod \"collect-profiles-29524110-t9884\" (UID: \"04df1a11-c308-49ba-a34a-74bf9346a01e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-t9884" Feb 18 20:30:00 crc kubenswrapper[4754]: I0218 20:30:00.507296 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-t9884" Feb 18 20:30:00 crc kubenswrapper[4754]: I0218 20:30:00.980324 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524110-t9884"] Feb 18 20:30:01 crc kubenswrapper[4754]: I0218 20:30:01.410979 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-t9884" event={"ID":"04df1a11-c308-49ba-a34a-74bf9346a01e","Type":"ContainerStarted","Data":"6aaab4659797f2ba89d1f93b046efb20e9f6e2343782a648f5ce4013b0f16e77"} Feb 18 20:30:02 crc kubenswrapper[4754]: I0218 20:30:02.420493 4754 generic.go:334] "Generic (PLEG): container finished" podID="04df1a11-c308-49ba-a34a-74bf9346a01e" containerID="7f586290d0b3acba2b234ff82ebbcadaca4b80ae4e212f067f32e423f966046a" exitCode=0 Feb 18 20:30:02 crc kubenswrapper[4754]: I0218 20:30:02.420743 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-t9884" event={"ID":"04df1a11-c308-49ba-a34a-74bf9346a01e","Type":"ContainerDied","Data":"7f586290d0b3acba2b234ff82ebbcadaca4b80ae4e212f067f32e423f966046a"} Feb 18 20:30:03 crc kubenswrapper[4754]: I0218 20:30:03.772561 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-t9884" Feb 18 20:30:03 crc kubenswrapper[4754]: I0218 20:30:03.928954 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04df1a11-c308-49ba-a34a-74bf9346a01e-config-volume\") pod \"04df1a11-c308-49ba-a34a-74bf9346a01e\" (UID: \"04df1a11-c308-49ba-a34a-74bf9346a01e\") " Feb 18 20:30:03 crc kubenswrapper[4754]: I0218 20:30:03.929243 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv8sk\" (UniqueName: \"kubernetes.io/projected/04df1a11-c308-49ba-a34a-74bf9346a01e-kube-api-access-bv8sk\") pod \"04df1a11-c308-49ba-a34a-74bf9346a01e\" (UID: \"04df1a11-c308-49ba-a34a-74bf9346a01e\") " Feb 18 20:30:03 crc kubenswrapper[4754]: I0218 20:30:03.929346 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04df1a11-c308-49ba-a34a-74bf9346a01e-secret-volume\") pod \"04df1a11-c308-49ba-a34a-74bf9346a01e\" (UID: \"04df1a11-c308-49ba-a34a-74bf9346a01e\") " Feb 18 20:30:03 crc kubenswrapper[4754]: I0218 20:30:03.930395 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04df1a11-c308-49ba-a34a-74bf9346a01e-config-volume" (OuterVolumeSpecName: "config-volume") pod "04df1a11-c308-49ba-a34a-74bf9346a01e" (UID: "04df1a11-c308-49ba-a34a-74bf9346a01e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:30:03 crc kubenswrapper[4754]: I0218 20:30:03.931743 4754 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04df1a11-c308-49ba-a34a-74bf9346a01e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:30:03 crc kubenswrapper[4754]: I0218 20:30:03.936616 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04df1a11-c308-49ba-a34a-74bf9346a01e-kube-api-access-bv8sk" (OuterVolumeSpecName: "kube-api-access-bv8sk") pod "04df1a11-c308-49ba-a34a-74bf9346a01e" (UID: "04df1a11-c308-49ba-a34a-74bf9346a01e"). InnerVolumeSpecName "kube-api-access-bv8sk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:30:03 crc kubenswrapper[4754]: I0218 20:30:03.936220 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04df1a11-c308-49ba-a34a-74bf9346a01e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "04df1a11-c308-49ba-a34a-74bf9346a01e" (UID: "04df1a11-c308-49ba-a34a-74bf9346a01e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:30:04 crc kubenswrapper[4754]: I0218 20:30:04.033259 4754 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04df1a11-c308-49ba-a34a-74bf9346a01e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:30:04 crc kubenswrapper[4754]: I0218 20:30:04.033606 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv8sk\" (UniqueName: \"kubernetes.io/projected/04df1a11-c308-49ba-a34a-74bf9346a01e-kube-api-access-bv8sk\") on node \"crc\" DevicePath \"\"" Feb 18 20:30:04 crc kubenswrapper[4754]: E0218 20:30:04.211451 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:30:04 crc kubenswrapper[4754]: I0218 20:30:04.438568 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-t9884" event={"ID":"04df1a11-c308-49ba-a34a-74bf9346a01e","Type":"ContainerDied","Data":"6aaab4659797f2ba89d1f93b046efb20e9f6e2343782a648f5ce4013b0f16e77"} Feb 18 20:30:04 crc kubenswrapper[4754]: I0218 20:30:04.438626 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aaab4659797f2ba89d1f93b046efb20e9f6e2343782a648f5ce4013b0f16e77" Feb 18 20:30:04 crc kubenswrapper[4754]: I0218 20:30:04.438634 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-t9884" Feb 18 20:30:04 crc kubenswrapper[4754]: I0218 20:30:04.850811 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj"] Feb 18 20:30:04 crc kubenswrapper[4754]: I0218 20:30:04.863366 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-d7vvj"] Feb 18 20:30:06 crc kubenswrapper[4754]: I0218 20:30:06.221798 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d3689e2-a4de-4632-b52b-9af35b45af82" path="/var/lib/kubelet/pods/8d3689e2-a4de-4632-b52b-9af35b45af82/volumes" Feb 18 20:30:14 crc kubenswrapper[4754]: I0218 20:30:14.212675 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:30:14 crc kubenswrapper[4754]: E0218 20:30:14.213740 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:30:17 crc kubenswrapper[4754]: E0218 20:30:17.214445 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:30:28 crc kubenswrapper[4754]: I0218 20:30:28.219742 4754 scope.go:117] "RemoveContainer" 
containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:30:28 crc kubenswrapper[4754]: E0218 20:30:28.220773 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:30:29 crc kubenswrapper[4754]: E0218 20:30:29.213078 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:30:40 crc kubenswrapper[4754]: I0218 20:30:40.033733 4754 scope.go:117] "RemoveContainer" containerID="d6d953c9cb519e8055b7ade7cc78f173b764da950e740b6dd9defa28f1d937d3" Feb 18 20:30:40 crc kubenswrapper[4754]: I0218 20:30:40.210383 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:30:40 crc kubenswrapper[4754]: E0218 20:30:40.210928 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:30:42 crc kubenswrapper[4754]: E0218 20:30:42.838544 4754 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:30:42 crc kubenswrapper[4754]: E0218 20:30:42.839119 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:30MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{31457280 0} {} 30Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcbdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sx4rr_openshift-marketplace(63eff0ce-71c0-481b-8a6f-14a6f07f3aa9): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:30:42 crc kubenswrapper[4754]: E0218 20:30:42.840269 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:30:51 crc kubenswrapper[4754]: I0218 20:30:51.211254 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:30:51 crc kubenswrapper[4754]: E0218 20:30:51.212635 4754 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:30:56 crc kubenswrapper[4754]: E0218 20:30:56.212264 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:31:04 crc kubenswrapper[4754]: I0218 20:31:04.209838 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:31:04 crc kubenswrapper[4754]: E0218 20:31:04.210897 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:31:09 crc kubenswrapper[4754]: E0218 20:31:09.213310 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:31:15 crc kubenswrapper[4754]: I0218 
20:31:15.211002 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:31:15 crc kubenswrapper[4754]: E0218 20:31:15.212448 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:31:21 crc kubenswrapper[4754]: E0218 20:31:21.211556 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:31:29 crc kubenswrapper[4754]: I0218 20:31:29.210386 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:31:29 crc kubenswrapper[4754]: E0218 20:31:29.211277 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:31:30 crc kubenswrapper[4754]: I0218 20:31:30.560989 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2p9z7"] Feb 18 20:31:30 crc kubenswrapper[4754]: E0218 20:31:30.562237 4754 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04df1a11-c308-49ba-a34a-74bf9346a01e" containerName="collect-profiles" Feb 18 20:31:30 crc kubenswrapper[4754]: I0218 20:31:30.562255 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="04df1a11-c308-49ba-a34a-74bf9346a01e" containerName="collect-profiles" Feb 18 20:31:30 crc kubenswrapper[4754]: I0218 20:31:30.562492 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="04df1a11-c308-49ba-a34a-74bf9346a01e" containerName="collect-profiles" Feb 18 20:31:30 crc kubenswrapper[4754]: I0218 20:31:30.564134 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2p9z7" Feb 18 20:31:30 crc kubenswrapper[4754]: I0218 20:31:30.592620 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2p9z7"] Feb 18 20:31:30 crc kubenswrapper[4754]: I0218 20:31:30.672207 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad178cd-bde2-49ee-9738-5fa2c7e06b99-catalog-content\") pod \"redhat-marketplace-2p9z7\" (UID: \"4ad178cd-bde2-49ee-9738-5fa2c7e06b99\") " pod="openshift-marketplace/redhat-marketplace-2p9z7" Feb 18 20:31:30 crc kubenswrapper[4754]: I0218 20:31:30.672291 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsznl\" (UniqueName: \"kubernetes.io/projected/4ad178cd-bde2-49ee-9738-5fa2c7e06b99-kube-api-access-jsznl\") pod \"redhat-marketplace-2p9z7\" (UID: \"4ad178cd-bde2-49ee-9738-5fa2c7e06b99\") " pod="openshift-marketplace/redhat-marketplace-2p9z7" Feb 18 20:31:30 crc kubenswrapper[4754]: I0218 20:31:30.672417 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4ad178cd-bde2-49ee-9738-5fa2c7e06b99-utilities\") pod \"redhat-marketplace-2p9z7\" (UID: \"4ad178cd-bde2-49ee-9738-5fa2c7e06b99\") " pod="openshift-marketplace/redhat-marketplace-2p9z7" Feb 18 20:31:30 crc kubenswrapper[4754]: I0218 20:31:30.774088 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad178cd-bde2-49ee-9738-5fa2c7e06b99-catalog-content\") pod \"redhat-marketplace-2p9z7\" (UID: \"4ad178cd-bde2-49ee-9738-5fa2c7e06b99\") " pod="openshift-marketplace/redhat-marketplace-2p9z7" Feb 18 20:31:30 crc kubenswrapper[4754]: I0218 20:31:30.774372 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsznl\" (UniqueName: \"kubernetes.io/projected/4ad178cd-bde2-49ee-9738-5fa2c7e06b99-kube-api-access-jsznl\") pod \"redhat-marketplace-2p9z7\" (UID: \"4ad178cd-bde2-49ee-9738-5fa2c7e06b99\") " pod="openshift-marketplace/redhat-marketplace-2p9z7" Feb 18 20:31:30 crc kubenswrapper[4754]: I0218 20:31:30.774529 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad178cd-bde2-49ee-9738-5fa2c7e06b99-utilities\") pod \"redhat-marketplace-2p9z7\" (UID: \"4ad178cd-bde2-49ee-9738-5fa2c7e06b99\") " pod="openshift-marketplace/redhat-marketplace-2p9z7" Feb 18 20:31:30 crc kubenswrapper[4754]: I0218 20:31:30.774587 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad178cd-bde2-49ee-9738-5fa2c7e06b99-catalog-content\") pod \"redhat-marketplace-2p9z7\" (UID: \"4ad178cd-bde2-49ee-9738-5fa2c7e06b99\") " pod="openshift-marketplace/redhat-marketplace-2p9z7" Feb 18 20:31:30 crc kubenswrapper[4754]: I0218 20:31:30.774935 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4ad178cd-bde2-49ee-9738-5fa2c7e06b99-utilities\") pod \"redhat-marketplace-2p9z7\" (UID: \"4ad178cd-bde2-49ee-9738-5fa2c7e06b99\") " pod="openshift-marketplace/redhat-marketplace-2p9z7" Feb 18 20:31:30 crc kubenswrapper[4754]: I0218 20:31:30.797009 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsznl\" (UniqueName: \"kubernetes.io/projected/4ad178cd-bde2-49ee-9738-5fa2c7e06b99-kube-api-access-jsznl\") pod \"redhat-marketplace-2p9z7\" (UID: \"4ad178cd-bde2-49ee-9738-5fa2c7e06b99\") " pod="openshift-marketplace/redhat-marketplace-2p9z7" Feb 18 20:31:30 crc kubenswrapper[4754]: I0218 20:31:30.885483 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2p9z7" Feb 18 20:31:31 crc kubenswrapper[4754]: I0218 20:31:31.181076 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2p9z7"] Feb 18 20:31:31 crc kubenswrapper[4754]: I0218 20:31:31.413026 4754 generic.go:334] "Generic (PLEG): container finished" podID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" containerID="5f768c11a4e6b69db2bd97db7e29d7b7321f667a18603653b3e8e72d9b259b64" exitCode=0 Feb 18 20:31:31 crc kubenswrapper[4754]: I0218 20:31:31.413317 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2p9z7" event={"ID":"4ad178cd-bde2-49ee-9738-5fa2c7e06b99","Type":"ContainerDied","Data":"5f768c11a4e6b69db2bd97db7e29d7b7321f667a18603653b3e8e72d9b259b64"} Feb 18 20:31:31 crc kubenswrapper[4754]: I0218 20:31:31.413340 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2p9z7" event={"ID":"4ad178cd-bde2-49ee-9738-5fa2c7e06b99","Type":"ContainerStarted","Data":"6cf329f081f17bbc4ebb888f449a02e2348fb74f3705501ad184664f85558537"} Feb 18 20:31:32 crc kubenswrapper[4754]: E0218 20:31:32.050080 4754 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:31:32 crc kubenswrapper[4754]: E0218 20:31:32.050351 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsznl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-2p9z7_openshift-marketplace(4ad178cd-bde2-49ee-9738-5fa2c7e06b99): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:31:32 crc kubenswrapper[4754]: E0218 20:31:32.051902 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:31:32 crc kubenswrapper[4754]: E0218 20:31:32.210882 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:31:32 crc kubenswrapper[4754]: E0218 20:31:32.422967 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:31:43 crc kubenswrapper[4754]: I0218 20:31:43.209591 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:31:43 crc kubenswrapper[4754]: E0218 20:31:43.210463 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:31:43 crc kubenswrapper[4754]: E0218 20:31:43.212297 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:31:47 crc kubenswrapper[4754]: E0218 20:31:47.906847 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:31:47 crc kubenswrapper[4754]: E0218 20:31:47.907523 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsznl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2p9z7_openshift-marketplace(4ad178cd-bde2-49ee-9738-5fa2c7e06b99): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:31:47 crc kubenswrapper[4754]: E0218 20:31:47.908762 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" 
podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:31:55 crc kubenswrapper[4754]: I0218 20:31:55.210505 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:31:55 crc kubenswrapper[4754]: E0218 20:31:55.211367 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:31:58 crc kubenswrapper[4754]: E0218 20:31:58.221237 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:31:58 crc kubenswrapper[4754]: E0218 20:31:58.221499 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:32:08 crc kubenswrapper[4754]: I0218 20:32:08.218545 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:32:08 crc kubenswrapper[4754]: E0218 20:32:08.224372 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:32:09 crc kubenswrapper[4754]: E0218 20:32:09.212007 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:32:11 crc kubenswrapper[4754]: E0218 20:32:11.933701 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:32:11 crc kubenswrapper[4754]: E0218 20:32:11.935196 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsznl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2p9z7_openshift-marketplace(4ad178cd-bde2-49ee-9738-5fa2c7e06b99): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:32:11 crc kubenswrapper[4754]: E0218 20:32:11.936692 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" 
podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:32:21 crc kubenswrapper[4754]: I0218 20:32:21.210597 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:32:21 crc kubenswrapper[4754]: E0218 20:32:21.211404 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:32:24 crc kubenswrapper[4754]: E0218 20:32:24.214934 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:32:25 crc kubenswrapper[4754]: E0218 20:32:25.211533 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:32:35 crc kubenswrapper[4754]: E0218 20:32:35.213026 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" 
podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:32:36 crc kubenswrapper[4754]: I0218 20:32:36.211122 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:32:36 crc kubenswrapper[4754]: E0218 20:32:36.211629 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:32:40 crc kubenswrapper[4754]: E0218 20:32:40.223470 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:32:47 crc kubenswrapper[4754]: E0218 20:32:47.232630 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:32:48 crc kubenswrapper[4754]: I0218 20:32:48.222698 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:32:48 crc kubenswrapper[4754]: E0218 20:32:48.226250 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:32:55 crc kubenswrapper[4754]: E0218 20:32:55.702598 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:32:55 crc kubenswrapper[4754]: E0218 20:32:55.704432 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsznl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMo
unt:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2p9z7_openshift-marketplace(4ad178cd-bde2-49ee-9738-5fa2c7e06b99): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:32:55 crc kubenswrapper[4754]: E0218 20:32:55.705831 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:32:59 crc kubenswrapper[4754]: I0218 20:32:59.209830 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:32:59 crc kubenswrapper[4754]: E0218 20:32:59.210812 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:33:03 crc kubenswrapper[4754]: E0218 20:33:03.212440 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:33:09 crc kubenswrapper[4754]: E0218 20:33:09.214993 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:33:12 crc kubenswrapper[4754]: I0218 20:33:12.210067 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:33:12 crc kubenswrapper[4754]: E0218 20:33:12.210748 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:33:18 crc kubenswrapper[4754]: E0218 20:33:18.224278 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:33:21 crc kubenswrapper[4754]: E0218 20:33:21.211949 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:33:25 crc kubenswrapper[4754]: I0218 20:33:25.210685 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:33:25 crc kubenswrapper[4754]: E0218 20:33:25.211962 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:33:32 crc kubenswrapper[4754]: E0218 20:33:32.212302 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:33:33 crc kubenswrapper[4754]: E0218 20:33:33.351089 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:33:33 crc kubenswrapper[4754]: E0218 20:33:33.351591 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog 
--cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:30MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{31457280 0} {} 30Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcbdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sx4rr_openshift-marketplace(63eff0ce-71c0-481b-8a6f-14a6f07f3aa9): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:33:33 crc kubenswrapper[4754]: E0218 20:33:33.353321 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:33:37 crc kubenswrapper[4754]: I0218 20:33:37.209854 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:33:37 crc kubenswrapper[4754]: E0218 20:33:37.210672 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:33:41 crc kubenswrapper[4754]: I0218 20:33:41.665596 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z5jpx"] Feb 18 20:33:41 crc kubenswrapper[4754]: I0218 20:33:41.669872 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z5jpx" Feb 18 20:33:41 crc kubenswrapper[4754]: I0218 20:33:41.677077 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z5jpx"] Feb 18 20:33:41 crc kubenswrapper[4754]: I0218 20:33:41.690120 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb8jq\" (UniqueName: \"kubernetes.io/projected/970512fc-4847-4c9e-b049-af67e5001a91-kube-api-access-nb8jq\") pod \"certified-operators-z5jpx\" (UID: \"970512fc-4847-4c9e-b049-af67e5001a91\") " pod="openshift-marketplace/certified-operators-z5jpx" Feb 18 20:33:41 crc kubenswrapper[4754]: I0218 20:33:41.690552 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/970512fc-4847-4c9e-b049-af67e5001a91-utilities\") pod \"certified-operators-z5jpx\" (UID: \"970512fc-4847-4c9e-b049-af67e5001a91\") " pod="openshift-marketplace/certified-operators-z5jpx" Feb 18 20:33:41 crc kubenswrapper[4754]: I0218 20:33:41.690633 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/970512fc-4847-4c9e-b049-af67e5001a91-catalog-content\") pod \"certified-operators-z5jpx\" (UID: \"970512fc-4847-4c9e-b049-af67e5001a91\") " pod="openshift-marketplace/certified-operators-z5jpx" Feb 18 20:33:41 crc kubenswrapper[4754]: I0218 20:33:41.793349 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb8jq\" (UniqueName: \"kubernetes.io/projected/970512fc-4847-4c9e-b049-af67e5001a91-kube-api-access-nb8jq\") pod \"certified-operators-z5jpx\" (UID: \"970512fc-4847-4c9e-b049-af67e5001a91\") " pod="openshift-marketplace/certified-operators-z5jpx" Feb 18 20:33:41 crc kubenswrapper[4754]: I0218 20:33:41.793424 4754 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/970512fc-4847-4c9e-b049-af67e5001a91-utilities\") pod \"certified-operators-z5jpx\" (UID: \"970512fc-4847-4c9e-b049-af67e5001a91\") " pod="openshift-marketplace/certified-operators-z5jpx" Feb 18 20:33:41 crc kubenswrapper[4754]: I0218 20:33:41.793505 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/970512fc-4847-4c9e-b049-af67e5001a91-catalog-content\") pod \"certified-operators-z5jpx\" (UID: \"970512fc-4847-4c9e-b049-af67e5001a91\") " pod="openshift-marketplace/certified-operators-z5jpx" Feb 18 20:33:41 crc kubenswrapper[4754]: I0218 20:33:41.793954 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/970512fc-4847-4c9e-b049-af67e5001a91-catalog-content\") pod \"certified-operators-z5jpx\" (UID: \"970512fc-4847-4c9e-b049-af67e5001a91\") " pod="openshift-marketplace/certified-operators-z5jpx" Feb 18 20:33:41 crc kubenswrapper[4754]: I0218 20:33:41.794082 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/970512fc-4847-4c9e-b049-af67e5001a91-utilities\") pod \"certified-operators-z5jpx\" (UID: \"970512fc-4847-4c9e-b049-af67e5001a91\") " pod="openshift-marketplace/certified-operators-z5jpx" Feb 18 20:33:41 crc kubenswrapper[4754]: I0218 20:33:41.824450 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb8jq\" (UniqueName: \"kubernetes.io/projected/970512fc-4847-4c9e-b049-af67e5001a91-kube-api-access-nb8jq\") pod \"certified-operators-z5jpx\" (UID: \"970512fc-4847-4c9e-b049-af67e5001a91\") " pod="openshift-marketplace/certified-operators-z5jpx" Feb 18 20:33:42 crc kubenswrapper[4754]: I0218 20:33:42.005442 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z5jpx" Feb 18 20:33:42 crc kubenswrapper[4754]: W0218 20:33:42.552208 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod970512fc_4847_4c9e_b049_af67e5001a91.slice/crio-72b3758536b7a80ac2c409731635a702ebc48067f4107547d491d59c342cad22 WatchSource:0}: Error finding container 72b3758536b7a80ac2c409731635a702ebc48067f4107547d491d59c342cad22: Status 404 returned error can't find the container with id 72b3758536b7a80ac2c409731635a702ebc48067f4107547d491d59c342cad22 Feb 18 20:33:42 crc kubenswrapper[4754]: I0218 20:33:42.566918 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z5jpx"] Feb 18 20:33:43 crc kubenswrapper[4754]: I0218 20:33:43.000249 4754 generic.go:334] "Generic (PLEG): container finished" podID="970512fc-4847-4c9e-b049-af67e5001a91" containerID="29525f126bc3ee56c5355fe3ae87d5665eab29ee8a05435db828fff6a418c2ee" exitCode=0 Feb 18 20:33:43 crc kubenswrapper[4754]: I0218 20:33:43.000302 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5jpx" event={"ID":"970512fc-4847-4c9e-b049-af67e5001a91","Type":"ContainerDied","Data":"29525f126bc3ee56c5355fe3ae87d5665eab29ee8a05435db828fff6a418c2ee"} Feb 18 20:33:43 crc kubenswrapper[4754]: I0218 20:33:43.000331 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5jpx" event={"ID":"970512fc-4847-4c9e-b049-af67e5001a91","Type":"ContainerStarted","Data":"72b3758536b7a80ac2c409731635a702ebc48067f4107547d491d59c342cad22"} Feb 18 20:33:43 crc kubenswrapper[4754]: I0218 20:33:43.002430 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:33:43 crc kubenswrapper[4754]: E0218 20:33:43.212413 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:33:44 crc kubenswrapper[4754]: E0218 20:33:44.067608 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 20:33:44 crc kubenswrapper[4754]: E0218 20:33:44.067872 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nb8jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,P
rocMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-z5jpx_openshift-marketplace(970512fc-4847-4c9e-b049-af67e5001a91): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:33:44 crc kubenswrapper[4754]: E0218 20:33:44.069111 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:33:45 crc kubenswrapper[4754]: E0218 20:33:45.037780 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:33:46 crc kubenswrapper[4754]: E0218 20:33:46.215725 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:33:52 crc kubenswrapper[4754]: I0218 20:33:52.211499 4754 scope.go:117] "RemoveContainer" 
containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:33:52 crc kubenswrapper[4754]: E0218 20:33:52.212910 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:33:57 crc kubenswrapper[4754]: E0218 20:33:57.213294 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:33:58 crc kubenswrapper[4754]: E0218 20:33:58.190829 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 20:33:58 crc kubenswrapper[4754]: E0218 20:33:58.190997 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nb8jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-z5jpx_openshift-marketplace(970512fc-4847-4c9e-b049-af67e5001a91): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:33:58 crc kubenswrapper[4754]: E0218 20:33:58.193068 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-z5jpx" 
podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:34:01 crc kubenswrapper[4754]: E0218 20:34:01.215042 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:34:07 crc kubenswrapper[4754]: I0218 20:34:07.209734 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:34:07 crc kubenswrapper[4754]: E0218 20:34:07.210839 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:34:12 crc kubenswrapper[4754]: E0218 20:34:12.212273 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:34:13 crc kubenswrapper[4754]: E0218 20:34:13.221600 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:34:16 crc kubenswrapper[4754]: 
E0218 20:34:16.212868 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:34:19 crc kubenswrapper[4754]: I0218 20:34:19.210267 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:34:20 crc kubenswrapper[4754]: I0218 20:34:20.397555 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"f3c809c76caa7b9194fcb2a3503104d7e30836515ff2a42357928f61132df615"} Feb 18 20:34:27 crc kubenswrapper[4754]: E0218 20:34:27.211455 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:34:28 crc kubenswrapper[4754]: E0218 20:34:28.049043 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:34:28 crc kubenswrapper[4754]: E0218 20:34:28.049682 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsznl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2p9z7_openshift-marketplace(4ad178cd-bde2-49ee-9738-5fa2c7e06b99): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:34:28 crc kubenswrapper[4754]: E0218 20:34:28.051224 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" 
with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:34:29 crc kubenswrapper[4754]: E0218 20:34:29.368554 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 20:34:29 crc kubenswrapper[4754]: E0218 20:34:29.368745 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nb8jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGr
oup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-z5jpx_openshift-marketplace(970512fc-4847-4c9e-b049-af67e5001a91): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:34:29 crc kubenswrapper[4754]: E0218 20:34:29.370055 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:34:40 crc kubenswrapper[4754]: E0218 20:34:40.215580 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:34:43 crc kubenswrapper[4754]: E0218 20:34:43.212130 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:34:44 crc kubenswrapper[4754]: E0218 20:34:44.210928 4754 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:34:53 crc kubenswrapper[4754]: E0218 20:34:53.211737 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:34:55 crc kubenswrapper[4754]: E0218 20:34:55.219979 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:34:57 crc kubenswrapper[4754]: E0218 20:34:57.211750 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:35:06 crc kubenswrapper[4754]: E0218 20:35:06.216023 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:35:07 crc 
kubenswrapper[4754]: E0218 20:35:07.212961 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:35:10 crc kubenswrapper[4754]: E0218 20:35:10.214542 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:35:20 crc kubenswrapper[4754]: E0218 20:35:20.215510 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:35:23 crc kubenswrapper[4754]: E0218 20:35:23.077704 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 20:35:23 crc kubenswrapper[4754]: E0218 20:35:23.078669 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nb8jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-z5jpx_openshift-marketplace(970512fc-4847-4c9e-b049-af67e5001a91): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:35:23 crc kubenswrapper[4754]: E0218 20:35:23.079890 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-z5jpx" 
podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:35:25 crc kubenswrapper[4754]: E0218 20:35:25.212263 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:35:35 crc kubenswrapper[4754]: E0218 20:35:35.212727 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:35:36 crc kubenswrapper[4754]: E0218 20:35:36.223496 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:35:39 crc kubenswrapper[4754]: E0218 20:35:39.214297 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:35:46 crc kubenswrapper[4754]: E0218 20:35:46.211542 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:35:50 crc kubenswrapper[4754]: E0218 20:35:50.212825 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:35:53 crc kubenswrapper[4754]: E0218 20:35:53.212733 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:35:58 crc kubenswrapper[4754]: E0218 20:35:58.219257 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:36:01 crc kubenswrapper[4754]: E0218 20:36:01.212018 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:36:04 crc kubenswrapper[4754]: E0218 20:36:04.212959 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:36:05 crc kubenswrapper[4754]: I0218 20:36:05.313244 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rcnjr"] Feb 18 20:36:05 crc kubenswrapper[4754]: I0218 20:36:05.326920 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rcnjr" Feb 18 20:36:05 crc kubenswrapper[4754]: I0218 20:36:05.329365 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rcnjr"] Feb 18 20:36:05 crc kubenswrapper[4754]: I0218 20:36:05.360590 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e06d3dd8-6a39-44e5-9042-9673a602cab8-utilities\") pod \"community-operators-rcnjr\" (UID: \"e06d3dd8-6a39-44e5-9042-9673a602cab8\") " pod="openshift-marketplace/community-operators-rcnjr" Feb 18 20:36:05 crc kubenswrapper[4754]: I0218 20:36:05.361008 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcw2x\" (UniqueName: \"kubernetes.io/projected/e06d3dd8-6a39-44e5-9042-9673a602cab8-kube-api-access-zcw2x\") pod \"community-operators-rcnjr\" (UID: \"e06d3dd8-6a39-44e5-9042-9673a602cab8\") " pod="openshift-marketplace/community-operators-rcnjr" Feb 18 20:36:05 crc kubenswrapper[4754]: I0218 20:36:05.361076 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e06d3dd8-6a39-44e5-9042-9673a602cab8-catalog-content\") pod \"community-operators-rcnjr\" (UID: \"e06d3dd8-6a39-44e5-9042-9673a602cab8\") " 
pod="openshift-marketplace/community-operators-rcnjr" Feb 18 20:36:05 crc kubenswrapper[4754]: I0218 20:36:05.463232 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e06d3dd8-6a39-44e5-9042-9673a602cab8-utilities\") pod \"community-operators-rcnjr\" (UID: \"e06d3dd8-6a39-44e5-9042-9673a602cab8\") " pod="openshift-marketplace/community-operators-rcnjr" Feb 18 20:36:05 crc kubenswrapper[4754]: I0218 20:36:05.463306 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcw2x\" (UniqueName: \"kubernetes.io/projected/e06d3dd8-6a39-44e5-9042-9673a602cab8-kube-api-access-zcw2x\") pod \"community-operators-rcnjr\" (UID: \"e06d3dd8-6a39-44e5-9042-9673a602cab8\") " pod="openshift-marketplace/community-operators-rcnjr" Feb 18 20:36:05 crc kubenswrapper[4754]: I0218 20:36:05.463359 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e06d3dd8-6a39-44e5-9042-9673a602cab8-catalog-content\") pod \"community-operators-rcnjr\" (UID: \"e06d3dd8-6a39-44e5-9042-9673a602cab8\") " pod="openshift-marketplace/community-operators-rcnjr" Feb 18 20:36:05 crc kubenswrapper[4754]: I0218 20:36:05.463859 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e06d3dd8-6a39-44e5-9042-9673a602cab8-catalog-content\") pod \"community-operators-rcnjr\" (UID: \"e06d3dd8-6a39-44e5-9042-9673a602cab8\") " pod="openshift-marketplace/community-operators-rcnjr" Feb 18 20:36:05 crc kubenswrapper[4754]: I0218 20:36:05.464731 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e06d3dd8-6a39-44e5-9042-9673a602cab8-utilities\") pod \"community-operators-rcnjr\" (UID: \"e06d3dd8-6a39-44e5-9042-9673a602cab8\") " 
pod="openshift-marketplace/community-operators-rcnjr" Feb 18 20:36:05 crc kubenswrapper[4754]: I0218 20:36:05.510796 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcw2x\" (UniqueName: \"kubernetes.io/projected/e06d3dd8-6a39-44e5-9042-9673a602cab8-kube-api-access-zcw2x\") pod \"community-operators-rcnjr\" (UID: \"e06d3dd8-6a39-44e5-9042-9673a602cab8\") " pod="openshift-marketplace/community-operators-rcnjr" Feb 18 20:36:05 crc kubenswrapper[4754]: I0218 20:36:05.659913 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rcnjr" Feb 18 20:36:06 crc kubenswrapper[4754]: I0218 20:36:06.175348 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rcnjr"] Feb 18 20:36:06 crc kubenswrapper[4754]: W0218 20:36:06.181307 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode06d3dd8_6a39_44e5_9042_9673a602cab8.slice/crio-aaf35b572104be590eccf3f5fb8fae89f61dfb7d2e0e93843863dd2cd1e2f221 WatchSource:0}: Error finding container aaf35b572104be590eccf3f5fb8fae89f61dfb7d2e0e93843863dd2cd1e2f221: Status 404 returned error can't find the container with id aaf35b572104be590eccf3f5fb8fae89f61dfb7d2e0e93843863dd2cd1e2f221 Feb 18 20:36:06 crc kubenswrapper[4754]: I0218 20:36:06.722984 4754 generic.go:334] "Generic (PLEG): container finished" podID="e06d3dd8-6a39-44e5-9042-9673a602cab8" containerID="aa6b44ce680f189aa4b851faf83d38d222bfb45caacd2165a8aed76cb099b19a" exitCode=0 Feb 18 20:36:06 crc kubenswrapper[4754]: I0218 20:36:06.723067 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcnjr" event={"ID":"e06d3dd8-6a39-44e5-9042-9673a602cab8","Type":"ContainerDied","Data":"aa6b44ce680f189aa4b851faf83d38d222bfb45caacd2165a8aed76cb099b19a"} Feb 18 20:36:06 crc kubenswrapper[4754]: I0218 20:36:06.723115 
4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcnjr" event={"ID":"e06d3dd8-6a39-44e5-9042-9673a602cab8","Type":"ContainerStarted","Data":"aaf35b572104be590eccf3f5fb8fae89f61dfb7d2e0e93843863dd2cd1e2f221"} Feb 18 20:36:08 crc kubenswrapper[4754]: E0218 20:36:08.691644 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:36:08 crc kubenswrapper[4754]: E0218 20:36:08.692043 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcw2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:ni
l,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rcnjr_openshift-marketplace(e06d3dd8-6a39-44e5-9042-9673a602cab8): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:36:08 crc kubenswrapper[4754]: E0218 20:36:08.693824 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:36:08 crc kubenswrapper[4754]: E0218 20:36:08.752431 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:36:10 crc kubenswrapper[4754]: E0218 20:36:10.211174 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:36:15 crc kubenswrapper[4754]: E0218 20:36:15.213155 4754 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:36:16 crc kubenswrapper[4754]: E0218 20:36:16.212613 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:36:21 crc kubenswrapper[4754]: E0218 20:36:21.150690 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:36:21 crc kubenswrapper[4754]: E0218 20:36:21.151423 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcw2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rcnjr_openshift-marketplace(e06d3dd8-6a39-44e5-9042-9673a602cab8): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:36:21 crc kubenswrapper[4754]: E0218 20:36:21.152603 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-rcnjr" 
podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:36:22 crc kubenswrapper[4754]: E0218 20:36:22.212806 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:36:26 crc kubenswrapper[4754]: E0218 20:36:26.213912 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:36:27 crc kubenswrapper[4754]: E0218 20:36:27.211867 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:36:35 crc kubenswrapper[4754]: E0218 20:36:35.220618 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:36:35 crc kubenswrapper[4754]: E0218 20:36:35.220648 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:36:38 crc kubenswrapper[4754]: I0218 20:36:38.096282 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:36:38 crc kubenswrapper[4754]: I0218 20:36:38.096580 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:36:39 crc kubenswrapper[4754]: E0218 20:36:39.212135 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:36:40 crc kubenswrapper[4754]: E0218 20:36:40.212869 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:36:49 crc kubenswrapper[4754]: E0218 20:36:49.212072 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:36:49 crc kubenswrapper[4754]: E0218 20:36:49.854243 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:36:49 crc kubenswrapper[4754]: E0218 20:36:49.854687 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcw2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil
,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rcnjr_openshift-marketplace(e06d3dd8-6a39-44e5-9042-9673a602cab8): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:36:49 crc kubenswrapper[4754]: E0218 20:36:49.855940 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:36:52 crc kubenswrapper[4754]: E0218 20:36:52.212189 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:36:52 crc kubenswrapper[4754]: E0218 20:36:52.962397 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 20:36:52 crc kubenswrapper[4754]: E0218 20:36:52.968767 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nb8jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-z5jpx_openshift-marketplace(970512fc-4847-4c9e-b049-af67e5001a91): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:36:52 crc kubenswrapper[4754]: E0218 20:36:52.970102 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" 
with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:37:03 crc kubenswrapper[4754]: E0218 20:37:03.213456 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:37:03 crc kubenswrapper[4754]: E0218 20:37:03.213727 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:37:04 crc kubenswrapper[4754]: E0218 20:37:04.212081 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:37:07 crc kubenswrapper[4754]: E0218 20:37:07.232060 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:37:08 crc kubenswrapper[4754]: I0218 20:37:08.096282 4754 patch_prober.go:28] interesting 
pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:37:08 crc kubenswrapper[4754]: I0218 20:37:08.096605 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:37:17 crc kubenswrapper[4754]: E0218 20:37:17.213408 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:37:17 crc kubenswrapper[4754]: E0218 20:37:17.213465 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:37:18 crc kubenswrapper[4754]: E0218 20:37:18.224483 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:37:18 crc kubenswrapper[4754]: E0218 20:37:18.939278 4754 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:37:18 crc kubenswrapper[4754]: E0218 20:37:18.939677 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsznl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-2p9z7_openshift-marketplace(4ad178cd-bde2-49ee-9738-5fa2c7e06b99): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:37:18 crc kubenswrapper[4754]: E0218 20:37:18.940923 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:37:31 crc kubenswrapper[4754]: E0218 20:37:31.212689 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:37:32 crc kubenswrapper[4754]: E0218 20:37:32.222446 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:37:32 crc kubenswrapper[4754]: E0218 20:37:32.226511 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:37:32 crc kubenswrapper[4754]: E0218 
20:37:32.971464 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:37:32 crc kubenswrapper[4754]: E0218 20:37:32.971626 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcw2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod community-operators-rcnjr_openshift-marketplace(e06d3dd8-6a39-44e5-9042-9673a602cab8): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:37:32 crc kubenswrapper[4754]: E0218 20:37:32.972877 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:37:38 crc kubenswrapper[4754]: I0218 20:37:38.096923 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:37:38 crc kubenswrapper[4754]: I0218 20:37:38.097512 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:37:38 crc kubenswrapper[4754]: I0218 20:37:38.097576 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 20:37:38 crc kubenswrapper[4754]: I0218 20:37:38.098470 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3c809c76caa7b9194fcb2a3503104d7e30836515ff2a42357928f61132df615"} 
pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:37:38 crc kubenswrapper[4754]: I0218 20:37:38.098569 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" containerID="cri-o://f3c809c76caa7b9194fcb2a3503104d7e30836515ff2a42357928f61132df615" gracePeriod=600 Feb 18 20:37:38 crc kubenswrapper[4754]: I0218 20:37:38.708509 4754 generic.go:334] "Generic (PLEG): container finished" podID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerID="f3c809c76caa7b9194fcb2a3503104d7e30836515ff2a42357928f61132df615" exitCode=0 Feb 18 20:37:38 crc kubenswrapper[4754]: I0218 20:37:38.708795 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerDied","Data":"f3c809c76caa7b9194fcb2a3503104d7e30836515ff2a42357928f61132df615"} Feb 18 20:37:38 crc kubenswrapper[4754]: I0218 20:37:38.708822 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerStarted","Data":"7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15"} Feb 18 20:37:38 crc kubenswrapper[4754]: I0218 20:37:38.708838 4754 scope.go:117] "RemoveContainer" containerID="3b2cb1f0c371c63cc99221f2ec0e9b9d15fc6adc19ab240f96aeb51eac80df6e" Feb 18 20:37:42 crc kubenswrapper[4754]: E0218 20:37:42.213527 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:37:46 crc kubenswrapper[4754]: E0218 20:37:46.214471 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:37:46 crc kubenswrapper[4754]: E0218 20:37:46.214699 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:37:46 crc kubenswrapper[4754]: E0218 20:37:46.214810 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:37:53 crc kubenswrapper[4754]: E0218 20:37:53.213186 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:37:59 crc kubenswrapper[4754]: E0218 20:37:59.212843 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:37:59 crc kubenswrapper[4754]: E0218 20:37:59.212957 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:38:01 crc kubenswrapper[4754]: E0218 20:38:01.211933 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:38:07 crc kubenswrapper[4754]: E0218 20:38:07.212599 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:38:12 crc kubenswrapper[4754]: E0218 20:38:12.214189 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:38:13 crc kubenswrapper[4754]: E0218 20:38:13.211808 4754 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:38:16 crc kubenswrapper[4754]: E0218 20:38:16.211605 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:38:18 crc kubenswrapper[4754]: E0218 20:38:18.215085 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:38:24 crc kubenswrapper[4754]: E0218 20:38:24.212925 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:38:27 crc kubenswrapper[4754]: E0218 20:38:27.211043 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:38:28 crc kubenswrapper[4754]: E0218 20:38:28.226568 4754 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:38:29 crc kubenswrapper[4754]: E0218 20:38:29.212522 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:38:36 crc kubenswrapper[4754]: E0218 20:38:36.211186 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:38:39 crc kubenswrapper[4754]: E0218 20:38:39.211785 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:38:42 crc kubenswrapper[4754]: E0218 20:38:42.214100 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:38:43 crc kubenswrapper[4754]: 
E0218 20:38:43.512051 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:38:43 crc kubenswrapper[4754]: E0218 20:38:43.512326 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:30MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{31457280 0} {} 30Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcbdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sx4rr_openshift-marketplace(63eff0ce-71c0-481b-8a6f-14a6f07f3aa9): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:38:43 crc kubenswrapper[4754]: E0218 20:38:43.513531 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:38:51 crc kubenswrapper[4754]: E0218 20:38:51.212205 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:38:51 crc kubenswrapper[4754]: E0218 20:38:51.212703 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:38:56 crc kubenswrapper[4754]: E0218 20:38:56.213296 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:38:57 crc kubenswrapper[4754]: E0218 20:38:57.211301 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:39:03 crc kubenswrapper[4754]: I0218 20:39:03.213709 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:39:04 crc kubenswrapper[4754]: E0218 20:39:04.940894 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:39:04 crc kubenswrapper[4754]: E0218 20:39:04.941456 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcw2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rcnjr_openshift-marketplace(e06d3dd8-6a39-44e5-9042-9673a602cab8): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:39:04 crc kubenswrapper[4754]: E0218 20:39:04.943223 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:39:05 crc kubenswrapper[4754]: E0218 20:39:05.211859 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:39:09 crc kubenswrapper[4754]: E0218 20:39:09.216017 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:39:11 crc kubenswrapper[4754]: E0218 20:39:11.211756 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:39:17 crc kubenswrapper[4754]: E0218 20:39:17.211617 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:39:17 crc kubenswrapper[4754]: E0218 20:39:17.211756 4754 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:39:21 crc kubenswrapper[4754]: I0218 20:39:21.807575 4754 generic.go:334] "Generic (PLEG): container finished" podID="1226d67e-0705-4bf9-b00a-9f7acd8bdfc1" containerID="b71feadadccecf3df28b71573caea88f0c868aec59a06cf3b9ba391616c3ec34" exitCode=0 Feb 18 20:39:21 crc kubenswrapper[4754]: I0218 20:39:21.807670 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1","Type":"ContainerDied","Data":"b71feadadccecf3df28b71573caea88f0c868aec59a06cf3b9ba391616c3ec34"} Feb 18 20:39:22 crc kubenswrapper[4754]: E0218 20:39:22.219454 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:39:23 crc kubenswrapper[4754]: E0218 20:39:23.211703 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.239022 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.402201 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-ca-certs\") pod \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.402314 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-test-operator-ephemeral-workdir\") pod \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.402390 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85wxt\" (UniqueName: \"kubernetes.io/projected/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-kube-api-access-85wxt\") pod \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.402431 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-openstack-config\") pod \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.402460 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-openstack-config-secret\") pod \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.402596 4754 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-test-operator-ephemeral-temporary\") pod \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.402637 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.402733 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-config-data\") pod \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.402778 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-ssh-key\") pod \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\" (UID: \"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1\") " Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.403278 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "1226d67e-0705-4bf9-b00a-9f7acd8bdfc1" (UID: "1226d67e-0705-4bf9-b00a-9f7acd8bdfc1"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.404501 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-config-data" (OuterVolumeSpecName: "config-data") pod "1226d67e-0705-4bf9-b00a-9f7acd8bdfc1" (UID: "1226d67e-0705-4bf9-b00a-9f7acd8bdfc1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.416384 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "1226d67e-0705-4bf9-b00a-9f7acd8bdfc1" (UID: "1226d67e-0705-4bf9-b00a-9f7acd8bdfc1"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.421905 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-kube-api-access-85wxt" (OuterVolumeSpecName: "kube-api-access-85wxt") pod "1226d67e-0705-4bf9-b00a-9f7acd8bdfc1" (UID: "1226d67e-0705-4bf9-b00a-9f7acd8bdfc1"). InnerVolumeSpecName "kube-api-access-85wxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.449059 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "1226d67e-0705-4bf9-b00a-9f7acd8bdfc1" (UID: "1226d67e-0705-4bf9-b00a-9f7acd8bdfc1"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.465373 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1226d67e-0705-4bf9-b00a-9f7acd8bdfc1" (UID: "1226d67e-0705-4bf9-b00a-9f7acd8bdfc1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.466824 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "1226d67e-0705-4bf9-b00a-9f7acd8bdfc1" (UID: "1226d67e-0705-4bf9-b00a-9f7acd8bdfc1"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.495659 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "1226d67e-0705-4bf9-b00a-9f7acd8bdfc1" (UID: "1226d67e-0705-4bf9-b00a-9f7acd8bdfc1"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.502544 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "1226d67e-0705-4bf9-b00a-9f7acd8bdfc1" (UID: "1226d67e-0705-4bf9-b00a-9f7acd8bdfc1"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.504878 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85wxt\" (UniqueName: \"kubernetes.io/projected/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-kube-api-access-85wxt\") on node \"crc\" DevicePath \"\"" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.504929 4754 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.504948 4754 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.504967 4754 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.505026 4754 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.505045 4754 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.505060 4754 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 18 20:39:23 crc 
kubenswrapper[4754]: I0218 20:39:23.505075 4754 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.505091 4754 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1226d67e-0705-4bf9-b00a-9f7acd8bdfc1-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.533105 4754 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.607213 4754 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.835537 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1226d67e-0705-4bf9-b00a-9f7acd8bdfc1","Type":"ContainerDied","Data":"1e012fd1949b4b411202d8776015968e3ac3211911a5a34ece7bec78bdf38ce5"} Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.835574 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e012fd1949b4b411202d8776015968e3ac3211911a5a34ece7bec78bdf38ce5" Feb 18 20:39:23 crc kubenswrapper[4754]: I0218 20:39:23.835573 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 20:39:25 crc kubenswrapper[4754]: I0218 20:39:25.864888 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 20:39:25 crc kubenswrapper[4754]: E0218 20:39:25.865629 4754 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1226d67e-0705-4bf9-b00a-9f7acd8bdfc1" containerName="tempest-tests-tempest-tests-runner" Feb 18 20:39:25 crc kubenswrapper[4754]: I0218 20:39:25.865645 4754 state_mem.go:107] "Deleted CPUSet assignment" podUID="1226d67e-0705-4bf9-b00a-9f7acd8bdfc1" containerName="tempest-tests-tempest-tests-runner" Feb 18 20:39:25 crc kubenswrapper[4754]: I0218 20:39:25.865934 4754 memory_manager.go:354] "RemoveStaleState removing state" podUID="1226d67e-0705-4bf9-b00a-9f7acd8bdfc1" containerName="tempest-tests-tempest-tests-runner" Feb 18 20:39:25 crc kubenswrapper[4754]: I0218 20:39:25.866830 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:39:25 crc kubenswrapper[4754]: I0218 20:39:25.869218 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mrfb5" Feb 18 20:39:25 crc kubenswrapper[4754]: I0218 20:39:25.876010 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 20:39:25 crc kubenswrapper[4754]: I0218 20:39:25.956981 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hmqk\" (UniqueName: \"kubernetes.io/projected/2bd39ede-50e1-4ecd-aeee-c4693ed571e4-kube-api-access-2hmqk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2bd39ede-50e1-4ecd-aeee-c4693ed571e4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:39:25 crc kubenswrapper[4754]: I0218 20:39:25.957067 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2bd39ede-50e1-4ecd-aeee-c4693ed571e4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:39:26 crc kubenswrapper[4754]: I0218 20:39:26.059671 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hmqk\" (UniqueName: \"kubernetes.io/projected/2bd39ede-50e1-4ecd-aeee-c4693ed571e4-kube-api-access-2hmqk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2bd39ede-50e1-4ecd-aeee-c4693ed571e4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:39:26 crc kubenswrapper[4754]: I0218 20:39:26.059818 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2bd39ede-50e1-4ecd-aeee-c4693ed571e4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:39:26 crc kubenswrapper[4754]: I0218 20:39:26.060213 4754 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2bd39ede-50e1-4ecd-aeee-c4693ed571e4\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:39:26 crc kubenswrapper[4754]: I0218 20:39:26.107452 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hmqk\" (UniqueName: \"kubernetes.io/projected/2bd39ede-50e1-4ecd-aeee-c4693ed571e4-kube-api-access-2hmqk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2bd39ede-50e1-4ecd-aeee-c4693ed571e4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:39:26 crc kubenswrapper[4754]: I0218 20:39:26.107460 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"2bd39ede-50e1-4ecd-aeee-c4693ed571e4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:39:26 crc kubenswrapper[4754]: I0218 20:39:26.190249 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:39:26 crc kubenswrapper[4754]: W0218 20:39:26.634118 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bd39ede_50e1_4ecd_aeee_c4693ed571e4.slice/crio-5a962ea86b0300b74ed15954804cd33bb189f6cd60c4b93b34e589c444723ec9 WatchSource:0}: Error finding container 5a962ea86b0300b74ed15954804cd33bb189f6cd60c4b93b34e589c444723ec9: Status 404 returned error can't find the container with id 5a962ea86b0300b74ed15954804cd33bb189f6cd60c4b93b34e589c444723ec9 Feb 18 20:39:26 crc kubenswrapper[4754]: I0218 20:39:26.634932 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 20:39:26 crc kubenswrapper[4754]: I0218 20:39:26.870109 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"2bd39ede-50e1-4ecd-aeee-c4693ed571e4","Type":"ContainerStarted","Data":"5a962ea86b0300b74ed15954804cd33bb189f6cd60c4b93b34e589c444723ec9"} Feb 18 20:39:27 crc kubenswrapper[4754]: E0218 20:39:27.674475 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/quay/busybox:latest" Feb 18 20:39:27 crc kubenswrapper[4754]: E0218 20:39:27.674755 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hmqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(2bd39ede-50e1-4ecd-aeee-c4693ed571e4): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:39:27 crc kubenswrapper[4754]: E0218 20:39:27.675994 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:39:27 crc kubenswrapper[4754]: E0218 20:39:27.881224 
4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:39:28 crc kubenswrapper[4754]: E0218 20:39:28.225374 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:39:32 crc kubenswrapper[4754]: E0218 20:39:32.213387 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:39:34 crc kubenswrapper[4754]: E0218 20:39:34.212981 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:39:36 crc kubenswrapper[4754]: E0218 20:39:36.213156 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:39:38 crc kubenswrapper[4754]: I0218 
20:39:38.096523 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:39:38 crc kubenswrapper[4754]: I0218 20:39:38.096959 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:39:40 crc kubenswrapper[4754]: E0218 20:39:40.849927 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 20:39:40 crc kubenswrapper[4754]: E0218 20:39:40.850706 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nb8jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-z5jpx_openshift-marketplace(970512fc-4847-4c9e-b049-af67e5001a91): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:39:40 crc kubenswrapper[4754]: E0218 20:39:40.851879 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-z5jpx" 
podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:39:42 crc kubenswrapper[4754]: E0218 20:39:42.202251 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/quay/busybox:latest" Feb 18 20:39:42 crc kubenswrapper[4754]: E0218 20:39:42.202854 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hmqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(2bd39ede-50e1-4ecd-aeee-c4693ed571e4): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:39:42 crc 
kubenswrapper[4754]: E0218 20:39:42.204563 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:39:44 crc kubenswrapper[4754]: E0218 20:39:44.212189 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:39:47 crc kubenswrapper[4754]: E0218 20:39:47.214602 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:39:48 crc kubenswrapper[4754]: E0218 20:39:48.220112 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:39:56 crc kubenswrapper[4754]: E0218 20:39:56.211529 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" 
podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:39:57 crc kubenswrapper[4754]: E0218 20:39:57.211345 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:39:59 crc kubenswrapper[4754]: E0218 20:39:59.211968 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:39:59 crc kubenswrapper[4754]: E0218 20:39:59.212086 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:40:02 crc kubenswrapper[4754]: E0218 20:40:02.214690 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:40:08 crc kubenswrapper[4754]: I0218 20:40:08.096190 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 18 20:40:08 crc kubenswrapper[4754]: I0218 20:40:08.096603 4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:40:10 crc kubenswrapper[4754]: E0218 20:40:10.215481 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:40:11 crc kubenswrapper[4754]: E0218 20:40:11.211971 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:40:12 crc kubenswrapper[4754]: E0218 20:40:12.765334 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/quay/busybox:latest" Feb 18 20:40:12 crc kubenswrapper[4754]: E0218 20:40:12.765830 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hmqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(2bd39ede-50e1-4ecd-aeee-c4693ed571e4): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:40:12 crc kubenswrapper[4754]: E0218 20:40:12.767120 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:40:14 crc kubenswrapper[4754]: E0218 20:40:14.212967 
4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:40:14 crc kubenswrapper[4754]: E0218 20:40:14.214025 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:40:23 crc kubenswrapper[4754]: E0218 20:40:23.218965 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:40:23 crc kubenswrapper[4754]: E0218 20:40:23.225480 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:40:24 crc kubenswrapper[4754]: E0218 20:40:24.214266 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:40:27 crc kubenswrapper[4754]: E0218 
20:40:27.213337 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:40:27 crc kubenswrapper[4754]: E0218 20:40:27.214284 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:40:34 crc kubenswrapper[4754]: E0218 20:40:34.214682 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:40:37 crc kubenswrapper[4754]: E0218 20:40:37.213737 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:40:38 crc kubenswrapper[4754]: I0218 20:40:38.096347 4754 patch_prober.go:28] interesting pod/machine-config-daemon-wmjxr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:40:38 crc kubenswrapper[4754]: I0218 20:40:38.096720 
4754 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:40:38 crc kubenswrapper[4754]: I0218 20:40:38.096792 4754 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" Feb 18 20:40:38 crc kubenswrapper[4754]: I0218 20:40:38.097647 4754 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15"} pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:40:38 crc kubenswrapper[4754]: I0218 20:40:38.097753 4754 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerName="machine-config-daemon" containerID="cri-o://7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" gracePeriod=600 Feb 18 20:40:38 crc kubenswrapper[4754]: E0218 20:40:38.222535 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:40:38 crc kubenswrapper[4754]: E0218 20:40:38.253615 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:40:38 crc kubenswrapper[4754]: I0218 20:40:38.735831 4754 generic.go:334] "Generic (PLEG): container finished" podID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" exitCode=0 Feb 18 20:40:38 crc kubenswrapper[4754]: I0218 20:40:38.735879 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" event={"ID":"5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8","Type":"ContainerDied","Data":"7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15"} Feb 18 20:40:38 crc kubenswrapper[4754]: I0218 20:40:38.735973 4754 scope.go:117] "RemoveContainer" containerID="f3c809c76caa7b9194fcb2a3503104d7e30836515ff2a42357928f61132df615" Feb 18 20:40:38 crc kubenswrapper[4754]: I0218 20:40:38.736861 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:40:38 crc kubenswrapper[4754]: E0218 20:40:38.737272 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:40:39 crc kubenswrapper[4754]: E0218 20:40:39.211797 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:40:40 crc kubenswrapper[4754]: E0218 20:40:40.213648 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:40:45 crc kubenswrapper[4754]: E0218 20:40:45.215514 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:40:48 crc kubenswrapper[4754]: E0218 20:40:48.226236 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:40:49 crc kubenswrapper[4754]: E0218 20:40:49.211624 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:40:53 crc kubenswrapper[4754]: I0218 20:40:53.210273 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:40:53 crc 
kubenswrapper[4754]: E0218 20:40:53.211104 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:40:53 crc kubenswrapper[4754]: E0218 20:40:53.212755 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:40:55 crc kubenswrapper[4754]: E0218 20:40:55.211843 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:40:56 crc kubenswrapper[4754]: E0218 20:40:56.211805 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:41:00 crc kubenswrapper[4754]: E0218 20:41:00.220704 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:41:04 crc kubenswrapper[4754]: E0218 20:41:04.098088 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/quay/busybox:latest" Feb 18 20:41:04 crc kubenswrapper[4754]: E0218 20:41:04.098976 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hmqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(2bd39ede-50e1-4ecd-aeee-c4693ed571e4): ErrImagePull: parsing image configuration: 
fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:41:04 crc kubenswrapper[4754]: E0218 20:41:04.100369 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:41:04 crc kubenswrapper[4754]: E0218 20:41:04.211354 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:41:06 crc kubenswrapper[4754]: I0218 20:41:06.209847 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:41:06 crc kubenswrapper[4754]: E0218 20:41:06.210523 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:41:06 crc kubenswrapper[4754]: E0218 20:41:06.214677 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:41:09 crc kubenswrapper[4754]: E0218 20:41:09.212472 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:41:15 crc kubenswrapper[4754]: E0218 20:41:15.214531 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:41:15 crc kubenswrapper[4754]: E0218 20:41:15.220448 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:41:15 crc kubenswrapper[4754]: E0218 20:41:15.220767 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:41:19 crc kubenswrapper[4754]: I0218 20:41:19.209376 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:41:19 crc kubenswrapper[4754]: E0218 20:41:19.210168 4754 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:41:20 crc kubenswrapper[4754]: E0218 20:41:20.211882 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:41:22 crc kubenswrapper[4754]: E0218 20:41:22.211919 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:41:26 crc kubenswrapper[4754]: E0218 20:41:26.212857 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:41:28 crc kubenswrapper[4754]: E0218 20:41:28.219296 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" 
podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:41:28 crc kubenswrapper[4754]: E0218 20:41:28.219575 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:41:31 crc kubenswrapper[4754]: I0218 20:41:31.209495 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:41:31 crc kubenswrapper[4754]: E0218 20:41:31.209935 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:41:33 crc kubenswrapper[4754]: E0218 20:41:33.212065 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:41:34 crc kubenswrapper[4754]: E0218 20:41:34.211651 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:41:39 crc kubenswrapper[4754]: E0218 20:41:39.214852 4754 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:41:40 crc kubenswrapper[4754]: E0218 20:41:40.215475 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:41:42 crc kubenswrapper[4754]: I0218 20:41:42.210095 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:41:42 crc kubenswrapper[4754]: E0218 20:41:42.211263 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:41:42 crc kubenswrapper[4754]: E0218 20:41:42.212596 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:41:46 crc kubenswrapper[4754]: E0218 20:41:46.729617 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing 
image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:41:46 crc kubenswrapper[4754]: E0218 20:41:46.731472 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcw2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rcnjr_openshift-marketplace(e06d3dd8-6a39-44e5-9042-9673a602cab8): ErrImagePull: copying system image from manifest list: parsing 
image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:41:46 crc kubenswrapper[4754]: E0218 20:41:46.732838 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:41:48 crc kubenswrapper[4754]: E0218 20:41:48.224087 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:41:51 crc kubenswrapper[4754]: E0218 20:41:51.212808 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:41:53 crc kubenswrapper[4754]: E0218 20:41:53.212806 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:41:56 crc kubenswrapper[4754]: I0218 20:41:56.020832 4754 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xlzx5/must-gather-r5mkj"] Feb 18 20:41:56 crc kubenswrapper[4754]: I0218 20:41:56.023119 4754 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-must-gather-xlzx5/must-gather-r5mkj" Feb 18 20:41:56 crc kubenswrapper[4754]: I0218 20:41:56.031117 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xlzx5"/"openshift-service-ca.crt" Feb 18 20:41:56 crc kubenswrapper[4754]: I0218 20:41:56.031301 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xlzx5"/"kube-root-ca.crt" Feb 18 20:41:56 crc kubenswrapper[4754]: I0218 20:41:56.031357 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-xlzx5"/"default-dockercfg-2p4l2" Feb 18 20:41:56 crc kubenswrapper[4754]: I0218 20:41:56.056788 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xlzx5/must-gather-r5mkj"] Feb 18 20:41:56 crc kubenswrapper[4754]: I0218 20:41:56.096459 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-546wz\" (UniqueName: \"kubernetes.io/projected/3d6b8a2c-fcd2-4198-a654-52d40ddcda1b-kube-api-access-546wz\") pod \"must-gather-r5mkj\" (UID: \"3d6b8a2c-fcd2-4198-a654-52d40ddcda1b\") " pod="openshift-must-gather-xlzx5/must-gather-r5mkj" Feb 18 20:41:56 crc kubenswrapper[4754]: I0218 20:41:56.096566 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3d6b8a2c-fcd2-4198-a654-52d40ddcda1b-must-gather-output\") pod \"must-gather-r5mkj\" (UID: \"3d6b8a2c-fcd2-4198-a654-52d40ddcda1b\") " pod="openshift-must-gather-xlzx5/must-gather-r5mkj" Feb 18 20:41:56 crc kubenswrapper[4754]: I0218 20:41:56.198744 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3d6b8a2c-fcd2-4198-a654-52d40ddcda1b-must-gather-output\") pod \"must-gather-r5mkj\" (UID: \"3d6b8a2c-fcd2-4198-a654-52d40ddcda1b\") " 
pod="openshift-must-gather-xlzx5/must-gather-r5mkj" Feb 18 20:41:56 crc kubenswrapper[4754]: I0218 20:41:56.198876 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-546wz\" (UniqueName: \"kubernetes.io/projected/3d6b8a2c-fcd2-4198-a654-52d40ddcda1b-kube-api-access-546wz\") pod \"must-gather-r5mkj\" (UID: \"3d6b8a2c-fcd2-4198-a654-52d40ddcda1b\") " pod="openshift-must-gather-xlzx5/must-gather-r5mkj" Feb 18 20:41:56 crc kubenswrapper[4754]: I0218 20:41:56.199228 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3d6b8a2c-fcd2-4198-a654-52d40ddcda1b-must-gather-output\") pod \"must-gather-r5mkj\" (UID: \"3d6b8a2c-fcd2-4198-a654-52d40ddcda1b\") " pod="openshift-must-gather-xlzx5/must-gather-r5mkj" Feb 18 20:41:56 crc kubenswrapper[4754]: E0218 20:41:56.211879 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:41:56 crc kubenswrapper[4754]: I0218 20:41:56.223894 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-546wz\" (UniqueName: \"kubernetes.io/projected/3d6b8a2c-fcd2-4198-a654-52d40ddcda1b-kube-api-access-546wz\") pod \"must-gather-r5mkj\" (UID: \"3d6b8a2c-fcd2-4198-a654-52d40ddcda1b\") " pod="openshift-must-gather-xlzx5/must-gather-r5mkj" Feb 18 20:41:56 crc kubenswrapper[4754]: I0218 20:41:56.352575 4754 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xlzx5/must-gather-r5mkj" Feb 18 20:41:56 crc kubenswrapper[4754]: I0218 20:41:56.854824 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xlzx5/must-gather-r5mkj"] Feb 18 20:41:56 crc kubenswrapper[4754]: W0218 20:41:56.855709 4754 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d6b8a2c_fcd2_4198_a654_52d40ddcda1b.slice/crio-c5f682fd06a73901208c6f738e18ce649c49f38f7d948c936ed62989d8b5ca88 WatchSource:0}: Error finding container c5f682fd06a73901208c6f738e18ce649c49f38f7d948c936ed62989d8b5ca88: Status 404 returned error can't find the container with id c5f682fd06a73901208c6f738e18ce649c49f38f7d948c936ed62989d8b5ca88 Feb 18 20:41:57 crc kubenswrapper[4754]: I0218 20:41:57.210280 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:41:57 crc kubenswrapper[4754]: E0218 20:41:57.210544 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:41:57 crc kubenswrapper[4754]: I0218 20:41:57.537927 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xlzx5/must-gather-r5mkj" event={"ID":"3d6b8a2c-fcd2-4198-a654-52d40ddcda1b","Type":"ContainerStarted","Data":"c5f682fd06a73901208c6f738e18ce649c49f38f7d948c936ed62989d8b5ca88"} Feb 18 20:41:59 crc kubenswrapper[4754]: E0218 20:41:59.211496 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:41:59 crc kubenswrapper[4754]: E0218 20:41:59.222282 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openstack-k8s-operators/openstack-must-gather:latest" Feb 18 20:41:59 crc kubenswrapper[4754]: E0218 20:41:59.222484 4754 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 18 20:41:59 crc kubenswrapper[4754]: container &Container{Name:gather,Image:quay.io/openstack-k8s-operators/openstack-must-gather:latest,Command:[/bin/bash -c if command -v setsid >/dev/null 2>&1 && command -v ps >/dev/null 2>&1 && command -v pkill >/dev/null 2>&1; then Feb 18 20:41:59 crc kubenswrapper[4754]: HAVE_SESSION_TOOLS=true Feb 18 20:41:59 crc kubenswrapper[4754]: else Feb 18 20:41:59 crc kubenswrapper[4754]: HAVE_SESSION_TOOLS=false Feb 18 20:41:59 crc kubenswrapper[4754]: fi Feb 18 20:41:59 crc kubenswrapper[4754]: Feb 18 20:41:59 crc kubenswrapper[4754]: Feb 18 20:41:59 crc kubenswrapper[4754]: echo "[disk usage checker] Started" Feb 18 20:41:59 crc kubenswrapper[4754]: target_dir="/must-gather" Feb 18 20:41:59 crc kubenswrapper[4754]: usage_percentage_limit="80" Feb 18 20:41:59 crc kubenswrapper[4754]: while true; do Feb 18 20:41:59 crc kubenswrapper[4754]: usage_percentage=$(df -P "$target_dir" | awk 'NR==2 {print $5}' | sed 's/%//') Feb 18 20:41:59 crc kubenswrapper[4754]: echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}" Feb 18 20:41:59 crc kubenswrapper[4754]: if [ "$usage_percentage" -gt "$usage_percentage_limit" ]; then Feb 18 20:41:59 crc kubenswrapper[4754]: echo "[disk usage checker] Disk usage exceeds the volume percentage of 
${usage_percentage_limit} for mounted directory, terminating..." Feb 18 20:41:59 crc kubenswrapper[4754]: if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Feb 18 20:41:59 crc kubenswrapper[4754]: ps -o sess --no-headers | sort -u | while read sid; do Feb 18 20:41:59 crc kubenswrapper[4754]: [[ "$sid" -eq "${$}" ]] && continue Feb 18 20:41:59 crc kubenswrapper[4754]: pkill --signal SIGKILL --session "$sid" Feb 18 20:41:59 crc kubenswrapper[4754]: done Feb 18 20:41:59 crc kubenswrapper[4754]: else Feb 18 20:41:59 crc kubenswrapper[4754]: kill 0 Feb 18 20:41:59 crc kubenswrapper[4754]: fi Feb 18 20:41:59 crc kubenswrapper[4754]: exit 1 Feb 18 20:41:59 crc kubenswrapper[4754]: fi Feb 18 20:41:59 crc kubenswrapper[4754]: sleep 5 Feb 18 20:41:59 crc kubenswrapper[4754]: done & if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Feb 18 20:41:59 crc kubenswrapper[4754]: setsid -w bash <<-MUSTGATHER_EOF Feb 18 20:41:59 crc kubenswrapper[4754]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Feb 18 20:41:59 crc kubenswrapper[4754]: MUSTGATHER_EOF Feb 18 20:41:59 crc kubenswrapper[4754]: else Feb 18 20:41:59 crc kubenswrapper[4754]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Feb 18 20:41:59 crc kubenswrapper[4754]: fi; sync && echo 'Caches written to 
disk'],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:must-gather-output,ReadOnly:false,MountPath:/must-gather,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-546wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod must-gather-r5mkj_openshift-must-gather-xlzx5(3d6b8a2c-fcd2-4198-a654-52d40ddcda1b): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error Feb 18 20:41:59 crc kubenswrapper[4754]: > logger="UnhandledError" Feb 18 20:41:59 crc kubenswrapper[4754]: E0218 20:41:59.224540 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" 
pod="openshift-must-gather-xlzx5/must-gather-r5mkj" podUID="3d6b8a2c-fcd2-4198-a654-52d40ddcda1b" Feb 18 20:41:59 crc kubenswrapper[4754]: E0218 20:41:59.561346 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-xlzx5/must-gather-r5mkj" podUID="3d6b8a2c-fcd2-4198-a654-52d40ddcda1b" Feb 18 20:42:01 crc kubenswrapper[4754]: E0218 20:42:01.211910 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:42:05 crc kubenswrapper[4754]: E0218 20:42:05.215061 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:42:08 crc kubenswrapper[4754]: E0218 20:42:08.220102 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:42:09 crc kubenswrapper[4754]: E0218 20:42:09.211897 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:42:11 crc kubenswrapper[4754]: I0218 20:42:11.519201 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xlzx5/must-gather-r5mkj"] Feb 18 20:42:11 crc kubenswrapper[4754]: I0218 20:42:11.536218 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xlzx5/must-gather-r5mkj"] Feb 18 20:42:12 crc kubenswrapper[4754]: I0218 20:42:12.080654 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xlzx5/must-gather-r5mkj" Feb 18 20:42:12 crc kubenswrapper[4754]: I0218 20:42:12.203850 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3d6b8a2c-fcd2-4198-a654-52d40ddcda1b-must-gather-output\") pod \"3d6b8a2c-fcd2-4198-a654-52d40ddcda1b\" (UID: \"3d6b8a2c-fcd2-4198-a654-52d40ddcda1b\") " Feb 18 20:42:12 crc kubenswrapper[4754]: I0218 20:42:12.203895 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-546wz\" (UniqueName: \"kubernetes.io/projected/3d6b8a2c-fcd2-4198-a654-52d40ddcda1b-kube-api-access-546wz\") pod \"3d6b8a2c-fcd2-4198-a654-52d40ddcda1b\" (UID: \"3d6b8a2c-fcd2-4198-a654-52d40ddcda1b\") " Feb 18 20:42:12 crc kubenswrapper[4754]: I0218 20:42:12.204193 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d6b8a2c-fcd2-4198-a654-52d40ddcda1b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "3d6b8a2c-fcd2-4198-a654-52d40ddcda1b" (UID: "3d6b8a2c-fcd2-4198-a654-52d40ddcda1b"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:42:12 crc kubenswrapper[4754]: I0218 20:42:12.204446 4754 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3d6b8a2c-fcd2-4198-a654-52d40ddcda1b-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 18 20:42:12 crc kubenswrapper[4754]: I0218 20:42:12.210106 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:42:12 crc kubenswrapper[4754]: I0218 20:42:12.210557 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d6b8a2c-fcd2-4198-a654-52d40ddcda1b-kube-api-access-546wz" (OuterVolumeSpecName: "kube-api-access-546wz") pod "3d6b8a2c-fcd2-4198-a654-52d40ddcda1b" (UID: "3d6b8a2c-fcd2-4198-a654-52d40ddcda1b"). InnerVolumeSpecName "kube-api-access-546wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:42:12 crc kubenswrapper[4754]: E0218 20:42:12.210630 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:42:12 crc kubenswrapper[4754]: I0218 20:42:12.224237 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d6b8a2c-fcd2-4198-a654-52d40ddcda1b" path="/var/lib/kubelet/pods/3d6b8a2c-fcd2-4198-a654-52d40ddcda1b/volumes" Feb 18 20:42:12 crc kubenswrapper[4754]: I0218 20:42:12.306663 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-546wz\" (UniqueName: \"kubernetes.io/projected/3d6b8a2c-fcd2-4198-a654-52d40ddcda1b-kube-api-access-546wz\") on node \"crc\" DevicePath \"\"" 
Feb 18 20:42:12 crc kubenswrapper[4754]: I0218 20:42:12.685769 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xlzx5/must-gather-r5mkj" Feb 18 20:42:13 crc kubenswrapper[4754]: E0218 20:42:13.212335 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:42:16 crc kubenswrapper[4754]: E0218 20:42:16.212277 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:42:19 crc kubenswrapper[4754]: E0218 20:42:19.212347 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:42:20 crc kubenswrapper[4754]: E0218 20:42:20.212700 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:42:20 crc kubenswrapper[4754]: E0218 20:42:20.212751 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:42:23 crc kubenswrapper[4754]: I0218 20:42:23.210078 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:42:23 crc kubenswrapper[4754]: E0218 20:42:23.211026 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:42:29 crc kubenswrapper[4754]: E0218 20:42:29.211267 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:42:29 crc kubenswrapper[4754]: E0218 20:42:29.910911 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:42:29 crc kubenswrapper[4754]: E0218 20:42:29.911815 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsznl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2p9z7_openshift-marketplace(4ad178cd-bde2-49ee-9738-5fa2c7e06b99): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:42:29 crc kubenswrapper[4754]: E0218 20:42:29.916227 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" 
pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:42:31 crc kubenswrapper[4754]: E0218 20:42:31.213215 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:42:33 crc kubenswrapper[4754]: E0218 20:42:33.506821 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/quay/busybox:latest" Feb 18 20:42:33 crc kubenswrapper[4754]: E0218 20:42:33.507254 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hmqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[
]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(2bd39ede-50e1-4ecd-aeee-c4693ed571e4): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:42:33 crc kubenswrapper[4754]: E0218 20:42:33.508457 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:42:34 crc kubenswrapper[4754]: I0218 20:42:34.212094 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:42:34 crc kubenswrapper[4754]: E0218 20:42:34.212923 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:42:34 crc kubenswrapper[4754]: E0218 20:42:34.212999 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:42:41 crc 
kubenswrapper[4754]: E0218 20:42:41.212095 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:42:43 crc kubenswrapper[4754]: E0218 20:42:43.211185 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:42:45 crc kubenswrapper[4754]: E0218 20:42:45.211948 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:42:46 crc kubenswrapper[4754]: E0218 20:42:46.213974 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:42:47 crc kubenswrapper[4754]: E0218 20:42:47.212552 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 
18 20:42:49 crc kubenswrapper[4754]: I0218 20:42:49.210099 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:42:49 crc kubenswrapper[4754]: E0218 20:42:49.210774 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:42:52 crc kubenswrapper[4754]: E0218 20:42:52.213472 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:42:56 crc kubenswrapper[4754]: E0218 20:42:56.213352 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:42:59 crc kubenswrapper[4754]: E0218 20:42:59.212682 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:43:00 crc kubenswrapper[4754]: E0218 20:43:00.212808 4754 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:43:00 crc kubenswrapper[4754]: I0218 20:43:00.426238 4754 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6766dcd7b5-n5xnp" podUID="aa2625da-c499-4e0d-a13c-15aae4128a26" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 18 20:43:01 crc kubenswrapper[4754]: E0218 20:43:01.212028 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:43:04 crc kubenswrapper[4754]: I0218 20:43:04.211909 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:43:04 crc kubenswrapper[4754]: E0218 20:43:04.212488 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:43:04 crc kubenswrapper[4754]: E0218 20:43:04.214023 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:43:09 crc kubenswrapper[4754]: E0218 20:43:09.214228 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:43:11 crc kubenswrapper[4754]: E0218 20:43:11.213894 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:43:13 crc kubenswrapper[4754]: E0218 20:43:13.210990 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:43:13 crc kubenswrapper[4754]: E0218 20:43:13.211385 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:43:15 crc kubenswrapper[4754]: E0218 20:43:15.211705 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:43:16 crc kubenswrapper[4754]: I0218 20:43:16.210464 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:43:16 crc kubenswrapper[4754]: E0218 20:43:16.210860 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:43:20 crc kubenswrapper[4754]: E0218 20:43:20.213020 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:43:22 crc kubenswrapper[4754]: E0218 20:43:22.212870 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:43:26 crc kubenswrapper[4754]: E0218 20:43:26.210774 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" 
podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:43:28 crc kubenswrapper[4754]: E0218 20:43:28.219964 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:43:29 crc kubenswrapper[4754]: E0218 20:43:29.212192 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:43:31 crc kubenswrapper[4754]: I0218 20:43:31.210873 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:43:31 crc kubenswrapper[4754]: E0218 20:43:31.211703 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:43:31 crc kubenswrapper[4754]: E0218 20:43:31.212671 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:43:35 crc kubenswrapper[4754]: 
E0218 20:43:35.212276 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:43:39 crc kubenswrapper[4754]: E0218 20:43:39.213084 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:43:40 crc kubenswrapper[4754]: E0218 20:43:40.214295 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:43:41 crc kubenswrapper[4754]: E0218 20:43:41.212238 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:43:42 crc kubenswrapper[4754]: E0218 20:43:42.211976 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:43:43 crc 
kubenswrapper[4754]: I0218 20:43:43.209861 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:43:43 crc kubenswrapper[4754]: E0218 20:43:43.210135 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:43:48 crc kubenswrapper[4754]: E0218 20:43:48.232165 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:43:53 crc kubenswrapper[4754]: E0218 20:43:53.212514 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:43:54 crc kubenswrapper[4754]: E0218 20:43:54.210666 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:43:54 crc kubenswrapper[4754]: I0218 20:43:54.211571 4754 scope.go:117] "RemoveContainer" 
containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:43:54 crc kubenswrapper[4754]: E0218 20:43:54.212050 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:43:54 crc kubenswrapper[4754]: E0218 20:43:54.214865 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:43:55 crc kubenswrapper[4754]: E0218 20:43:55.797753 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Feb 18 20:43:55 crc kubenswrapper[4754]: E0218 20:43:55.798532 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:30MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 
-3} {} 10m DecimalSI},memory: {{31457280 0} {} 30Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jcbdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sx4rr_openshift-marketplace(63eff0ce-71c0-481b-8a6f-14a6f07f3aa9): ErrImagePull: parsing image configuration: 
fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:43:55 crc kubenswrapper[4754]: E0218 20:43:55.799741 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:44:01 crc kubenswrapper[4754]: E0218 20:44:01.212445 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:44:07 crc kubenswrapper[4754]: E0218 20:44:07.212461 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:44:08 crc kubenswrapper[4754]: I0218 20:44:08.222118 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:44:08 crc kubenswrapper[4754]: E0218 20:44:08.222696 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:44:08 crc kubenswrapper[4754]: E0218 
20:44:08.224829 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:44:08 crc kubenswrapper[4754]: E0218 20:44:08.225646 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:44:08 crc kubenswrapper[4754]: E0218 20:44:08.227155 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:44:14 crc kubenswrapper[4754]: E0218 20:44:14.212654 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:44:19 crc kubenswrapper[4754]: I0218 20:44:19.209958 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:44:19 crc kubenswrapper[4754]: E0218 20:44:19.210816 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:44:20 crc kubenswrapper[4754]: E0218 20:44:20.214922 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:44:21 crc kubenswrapper[4754]: E0218 20:44:21.211933 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:44:22 crc kubenswrapper[4754]: E0218 20:44:22.213576 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:44:22 crc kubenswrapper[4754]: E0218 20:44:22.214774 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:44:26 crc kubenswrapper[4754]: E0218 20:44:26.214280 4754 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:44:31 crc kubenswrapper[4754]: I0218 20:44:31.211124 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:44:31 crc kubenswrapper[4754]: E0218 20:44:31.212309 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:44:31 crc kubenswrapper[4754]: E0218 20:44:31.214403 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:44:34 crc kubenswrapper[4754]: E0218 20:44:34.212932 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:44:35 crc kubenswrapper[4754]: E0218 20:44:35.213694 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:44:36 crc kubenswrapper[4754]: E0218 20:44:36.213007 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:44:37 crc kubenswrapper[4754]: E0218 20:44:37.212180 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:44:43 crc kubenswrapper[4754]: I0218 20:44:43.211557 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:44:43 crc kubenswrapper[4754]: E0218 20:44:43.212457 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:44:44 crc kubenswrapper[4754]: E0218 20:44:44.212453 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:44:48 crc kubenswrapper[4754]: E0218 20:44:48.220943 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:44:48 crc kubenswrapper[4754]: E0218 20:44:48.221246 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:44:48 crc kubenswrapper[4754]: E0218 20:44:48.221667 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:44:48 crc kubenswrapper[4754]: I0218 20:44:48.222472 4754 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:44:49 crc kubenswrapper[4754]: E0218 20:44:49.048757 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 20:44:49 crc kubenswrapper[4754]: E0218 20:44:49.048970 4754 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nb8jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-z5jpx_openshift-marketplace(970512fc-4847-4c9e-b049-af67e5001a91): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:44:49 crc kubenswrapper[4754]: E0218 20:44:49.050231 4754 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:44:54 crc kubenswrapper[4754]: I0218 20:44:54.209788 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15" Feb 18 20:44:54 crc kubenswrapper[4754]: E0218 20:44:54.210409 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8" Feb 18 20:44:57 crc kubenswrapper[4754]: E0218 20:44:57.212871 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9" Feb 18 20:44:59 crc kubenswrapper[4754]: E0218 20:44:59.211397 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99" Feb 18 20:45:00 crc kubenswrapper[4754]: I0218 20:45:00.177557 4754 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29524125-57pnl"] Feb 18 20:45:00 crc kubenswrapper[4754]: I0218 20:45:00.180632 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-57pnl" Feb 18 20:45:00 crc kubenswrapper[4754]: I0218 20:45:00.183705 4754 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 20:45:00 crc kubenswrapper[4754]: I0218 20:45:00.184759 4754 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 20:45:00 crc kubenswrapper[4754]: I0218 20:45:00.196088 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524125-57pnl"] Feb 18 20:45:00 crc kubenswrapper[4754]: I0218 20:45:00.266880 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/434c432f-2d87-4b81-81f7-a86a51608531-config-volume\") pod \"collect-profiles-29524125-57pnl\" (UID: \"434c432f-2d87-4b81-81f7-a86a51608531\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-57pnl" Feb 18 20:45:00 crc kubenswrapper[4754]: I0218 20:45:00.266930 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/434c432f-2d87-4b81-81f7-a86a51608531-secret-volume\") pod \"collect-profiles-29524125-57pnl\" (UID: \"434c432f-2d87-4b81-81f7-a86a51608531\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-57pnl" Feb 18 20:45:00 crc kubenswrapper[4754]: I0218 20:45:00.266966 4754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbnlq\" (UniqueName: 
\"kubernetes.io/projected/434c432f-2d87-4b81-81f7-a86a51608531-kube-api-access-dbnlq\") pod \"collect-profiles-29524125-57pnl\" (UID: \"434c432f-2d87-4b81-81f7-a86a51608531\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-57pnl" Feb 18 20:45:00 crc kubenswrapper[4754]: I0218 20:45:00.370037 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/434c432f-2d87-4b81-81f7-a86a51608531-config-volume\") pod \"collect-profiles-29524125-57pnl\" (UID: \"434c432f-2d87-4b81-81f7-a86a51608531\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-57pnl" Feb 18 20:45:00 crc kubenswrapper[4754]: I0218 20:45:00.370092 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/434c432f-2d87-4b81-81f7-a86a51608531-secret-volume\") pod \"collect-profiles-29524125-57pnl\" (UID: \"434c432f-2d87-4b81-81f7-a86a51608531\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-57pnl" Feb 18 20:45:00 crc kubenswrapper[4754]: I0218 20:45:00.370126 4754 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbnlq\" (UniqueName: \"kubernetes.io/projected/434c432f-2d87-4b81-81f7-a86a51608531-kube-api-access-dbnlq\") pod \"collect-profiles-29524125-57pnl\" (UID: \"434c432f-2d87-4b81-81f7-a86a51608531\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-57pnl" Feb 18 20:45:00 crc kubenswrapper[4754]: I0218 20:45:00.371381 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/434c432f-2d87-4b81-81f7-a86a51608531-config-volume\") pod \"collect-profiles-29524125-57pnl\" (UID: \"434c432f-2d87-4b81-81f7-a86a51608531\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-57pnl" Feb 18 20:45:00 crc kubenswrapper[4754]: I0218 
20:45:00.376505 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/434c432f-2d87-4b81-81f7-a86a51608531-secret-volume\") pod \"collect-profiles-29524125-57pnl\" (UID: \"434c432f-2d87-4b81-81f7-a86a51608531\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-57pnl" Feb 18 20:45:00 crc kubenswrapper[4754]: I0218 20:45:00.394789 4754 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbnlq\" (UniqueName: \"kubernetes.io/projected/434c432f-2d87-4b81-81f7-a86a51608531-kube-api-access-dbnlq\") pod \"collect-profiles-29524125-57pnl\" (UID: \"434c432f-2d87-4b81-81f7-a86a51608531\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-57pnl" Feb 18 20:45:00 crc kubenswrapper[4754]: I0218 20:45:00.528459 4754 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-57pnl" Feb 18 20:45:01 crc kubenswrapper[4754]: I0218 20:45:01.001067 4754 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524125-57pnl"] Feb 18 20:45:01 crc kubenswrapper[4754]: I0218 20:45:01.048994 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-57pnl" event={"ID":"434c432f-2d87-4b81-81f7-a86a51608531","Type":"ContainerStarted","Data":"d6f081a7fec4ba0ee157fa492ed7b75b025f8f94e860dc6fea5729a9e02cc6ff"} Feb 18 20:45:02 crc kubenswrapper[4754]: I0218 20:45:02.060742 4754 generic.go:334] "Generic (PLEG): container finished" podID="434c432f-2d87-4b81-81f7-a86a51608531" containerID="3da08057f28e94da53a3faaaccea3966825d92ce4e460584bf884bb690d6c623" exitCode=0 Feb 18 20:45:02 crc kubenswrapper[4754]: I0218 20:45:02.060926 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-57pnl" 
event={"ID":"434c432f-2d87-4b81-81f7-a86a51608531","Type":"ContainerDied","Data":"3da08057f28e94da53a3faaaccea3966825d92ce4e460584bf884bb690d6c623"} Feb 18 20:45:02 crc kubenswrapper[4754]: E0218 20:45:02.212568 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91" Feb 18 20:45:03 crc kubenswrapper[4754]: E0218 20:45:03.212674 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4" Feb 18 20:45:03 crc kubenswrapper[4754]: E0218 20:45:03.212856 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8" Feb 18 20:45:03 crc kubenswrapper[4754]: I0218 20:45:03.527239 4754 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-57pnl" Feb 18 20:45:03 crc kubenswrapper[4754]: I0218 20:45:03.652574 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/434c432f-2d87-4b81-81f7-a86a51608531-secret-volume\") pod \"434c432f-2d87-4b81-81f7-a86a51608531\" (UID: \"434c432f-2d87-4b81-81f7-a86a51608531\") " Feb 18 20:45:03 crc kubenswrapper[4754]: I0218 20:45:03.652638 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/434c432f-2d87-4b81-81f7-a86a51608531-config-volume\") pod \"434c432f-2d87-4b81-81f7-a86a51608531\" (UID: \"434c432f-2d87-4b81-81f7-a86a51608531\") " Feb 18 20:45:03 crc kubenswrapper[4754]: I0218 20:45:03.652796 4754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbnlq\" (UniqueName: \"kubernetes.io/projected/434c432f-2d87-4b81-81f7-a86a51608531-kube-api-access-dbnlq\") pod \"434c432f-2d87-4b81-81f7-a86a51608531\" (UID: \"434c432f-2d87-4b81-81f7-a86a51608531\") " Feb 18 20:45:03 crc kubenswrapper[4754]: I0218 20:45:03.653196 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/434c432f-2d87-4b81-81f7-a86a51608531-config-volume" (OuterVolumeSpecName: "config-volume") pod "434c432f-2d87-4b81-81f7-a86a51608531" (UID: "434c432f-2d87-4b81-81f7-a86a51608531"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:45:03 crc kubenswrapper[4754]: I0218 20:45:03.653707 4754 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/434c432f-2d87-4b81-81f7-a86a51608531-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:45:03 crc kubenswrapper[4754]: I0218 20:45:03.659521 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/434c432f-2d87-4b81-81f7-a86a51608531-kube-api-access-dbnlq" (OuterVolumeSpecName: "kube-api-access-dbnlq") pod "434c432f-2d87-4b81-81f7-a86a51608531" (UID: "434c432f-2d87-4b81-81f7-a86a51608531"). InnerVolumeSpecName "kube-api-access-dbnlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:45:03 crc kubenswrapper[4754]: I0218 20:45:03.660542 4754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/434c432f-2d87-4b81-81f7-a86a51608531-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "434c432f-2d87-4b81-81f7-a86a51608531" (UID: "434c432f-2d87-4b81-81f7-a86a51608531"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:45:03 crc kubenswrapper[4754]: I0218 20:45:03.756101 4754 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/434c432f-2d87-4b81-81f7-a86a51608531-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 18 20:45:03 crc kubenswrapper[4754]: I0218 20:45:03.756209 4754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbnlq\" (UniqueName: \"kubernetes.io/projected/434c432f-2d87-4b81-81f7-a86a51608531-kube-api-access-dbnlq\") on node \"crc\" DevicePath \"\""
Feb 18 20:45:04 crc kubenswrapper[4754]: I0218 20:45:04.088494 4754 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-57pnl" event={"ID":"434c432f-2d87-4b81-81f7-a86a51608531","Type":"ContainerDied","Data":"d6f081a7fec4ba0ee157fa492ed7b75b025f8f94e860dc6fea5729a9e02cc6ff"}
Feb 18 20:45:04 crc kubenswrapper[4754]: I0218 20:45:04.088560 4754 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6f081a7fec4ba0ee157fa492ed7b75b025f8f94e860dc6fea5729a9e02cc6ff"
Feb 18 20:45:04 crc kubenswrapper[4754]: I0218 20:45:04.088590 4754 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524125-57pnl"
Feb 18 20:45:04 crc kubenswrapper[4754]: I0218 20:45:04.649021 4754 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz"]
Feb 18 20:45:04 crc kubenswrapper[4754]: I0218 20:45:04.662505 4754 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-f54lz"]
Feb 18 20:45:05 crc kubenswrapper[4754]: I0218 20:45:05.209852 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15"
Feb 18 20:45:05 crc kubenswrapper[4754]: E0218 20:45:05.210328 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8"
Feb 18 20:45:06 crc kubenswrapper[4754]: I0218 20:45:06.230441 4754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edb684bb-9943-4cd5-a4d4-208ee65d7adc" path="/var/lib/kubelet/pods/edb684bb-9943-4cd5-a4d4-208ee65d7adc/volumes"
Feb 18 20:45:10 crc kubenswrapper[4754]: E0218 20:45:10.212643 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9"
Feb 18 20:45:13 crc kubenswrapper[4754]: E0218 20:45:13.212086 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99"
Feb 18 20:45:14 crc kubenswrapper[4754]: E0218 20:45:14.214458 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z5jpx" podUID="970512fc-4847-4c9e-b049-af67e5001a91"
Feb 18 20:45:15 crc kubenswrapper[4754]: E0218 20:45:15.211969 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rcnjr" podUID="e06d3dd8-6a39-44e5-9042-9673a602cab8"
Feb 18 20:45:16 crc kubenswrapper[4754]: I0218 20:45:16.210948 4754 scope.go:117] "RemoveContainer" containerID="7dcaf6dbf83d82fb6b9bc876715d4c66fdf22ccf5e25c198ab4e1aff80fade15"
Feb 18 20:45:16 crc kubenswrapper[4754]: E0218 20:45:16.211450 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wmjxr_openshift-machine-config-operator(5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-wmjxr" podUID="5ac1417e-a3ba-4ffa-9d4c-5d6c9dccd5f8"
Feb 18 20:45:17 crc kubenswrapper[4754]: E0218 20:45:17.613506 4754 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/quay/busybox:latest"
Feb 18 20:45:17 crc kubenswrapper[4754]: E0218 20:45:17.613894 4754 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hmqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(2bd39ede-50e1-4ecd-aeee-c4693ed571e4): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:45:17 crc kubenswrapper[4754]: E0218 20:45:17.615160 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="2bd39ede-50e1-4ecd-aeee-c4693ed571e4"
Feb 18 20:45:22 crc kubenswrapper[4754]: E0218 20:45:22.214224 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-sx4rr" podUID="63eff0ce-71c0-481b-8a6f-14a6f07f3aa9"
Feb 18 20:45:24 crc kubenswrapper[4754]: E0218 20:45:24.212781 4754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2p9z7" podUID="4ad178cd-bde2-49ee-9738-5fa2c7e06b99"